00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 602 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3268 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.159 Using shallow fetch with depth 1 00:00:00.159 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.159 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.241 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.251 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.262 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.262 > git config core.sparsecheckout # timeout=10 00:00:06.271 > git read-tree -mu HEAD # timeout=10 00:00:06.286 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.304 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.304 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.425 [Pipeline] Start of Pipeline 00:00:06.441 [Pipeline] library 00:00:06.442 Loading library shm_lib@master 00:00:06.443 Library shm_lib@master is cached. Copying from home. 00:00:06.462 [Pipeline] node 00:00:06.477 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.479 [Pipeline] { 00:00:06.488 [Pipeline] catchError 00:00:06.489 [Pipeline] { 00:00:06.501 [Pipeline] wrap 00:00:06.509 [Pipeline] { 00:00:06.516 [Pipeline] stage 00:00:06.517 [Pipeline] { (Prologue) 00:00:06.533 [Pipeline] echo 00:00:06.534 Node: VM-host-SM16 00:00:06.539 [Pipeline] cleanWs 00:00:06.547 [WS-CLEANUP] Deleting project workspace... 00:00:06.547 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.552 [WS-CLEANUP] done 00:00:06.724 [Pipeline] setCustomBuildProperty 00:00:06.809 [Pipeline] httpRequest 00:00:06.832 [Pipeline] echo 00:00:06.833 Sorcerer 10.211.164.101 is alive 00:00:06.839 [Pipeline] httpRequest 00:00:06.842 HttpMethod: GET 00:00:06.842 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.842 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.861 Response Code: HTTP/1.1 200 OK 00:00:06.861 Success: Status code 200 is in the accepted range: 200,404 00:00:06.862 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:14.309 [Pipeline] sh 00:00:14.591 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:14.609 [Pipeline] httpRequest 00:00:14.642 [Pipeline] echo 00:00:14.644 Sorcerer 10.211.164.101 is alive 00:00:14.654 [Pipeline] httpRequest 00:00:14.659 HttpMethod: GET 00:00:14.660 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:14.661 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:14.678 Response Code: HTTP/1.1 200 OK 00:00:14.678 Success: Status code 200 is in the accepted range: 200,404 00:00:14.679 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:07.415 [Pipeline] sh 00:01:07.690 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:10.232 [Pipeline] sh 00:01:10.509 + git -C spdk log --oneline -n5 00:01:10.509 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:10.509 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:10.509 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:10.509 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:10.509 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:10.527 [Pipeline] withCredentials 00:01:10.536 > git --version # timeout=10 00:01:10.547 > git --version # 'git version 2.39.2' 00:01:10.560 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:10.562 [Pipeline] { 00:01:10.571 [Pipeline] retry 00:01:10.573 [Pipeline] { 00:01:10.589 [Pipeline] sh 00:01:10.866 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:12.251 [Pipeline] } 00:01:12.274 [Pipeline] // retry 00:01:12.282 [Pipeline] } 00:01:12.304 [Pipeline] // withCredentials 00:01:12.316 [Pipeline] httpRequest 00:01:12.338 [Pipeline] echo 00:01:12.340 Sorcerer 10.211.164.101 is alive 00:01:12.350 [Pipeline] httpRequest 00:01:12.356 HttpMethod: GET 00:01:12.356 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:12.357 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:12.357 Response Code: HTTP/1.1 200 OK 00:01:12.358 Success: Status code 200 is in the accepted range: 200,404 00:01:12.358 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:17.984 [Pipeline] sh 00:01:18.268 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.651 [Pipeline] sh 00:01:19.991 + git -C dpdk log --oneline -n5 00:01:19.991 eeb0605f11 version: 23.11.0 00:01:19.991 238778122a doc: update release notes for 23.11 00:01:19.991 46aa6b3cfc doc: fix description of RSS features 00:01:19.991 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:19.991 7e421ae345 devtools: support skipping forbid rule check 00:01:20.009 [Pipeline] writeFile 00:01:20.026 [Pipeline] sh 00:01:20.304 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.316 [Pipeline] sh 00:01:20.595 + cat autorun-spdk.conf 00:01:20.595 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.595 SPDK_TEST_NVMF=1 00:01:20.595 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.595 SPDK_TEST_USDT=1 00:01:20.595 SPDK_RUN_UBSAN=1 00:01:20.595 SPDK_TEST_NVMF_MDNS=1 00:01:20.595 NET_TYPE=virt 00:01:20.595 SPDK_JSONRPC_GO_CLIENT=1 00:01:20.595 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:20.595 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:20.595 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.601 RUN_NIGHTLY=1 00:01:20.603 [Pipeline] } 00:01:20.619 [Pipeline] // stage 00:01:20.634 [Pipeline] stage 00:01:20.636 [Pipeline] { (Run VM) 00:01:20.650 [Pipeline] sh 00:01:20.928 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.928 + echo 'Start stage prepare_nvme.sh' 00:01:20.928 Start stage prepare_nvme.sh 00:01:20.928 + [[ -n 7 ]] 00:01:20.928 + disk_prefix=ex7 00:01:20.928 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:20.928 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:20.928 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:20.928 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.928 ++ SPDK_TEST_NVMF=1 00:01:20.928 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.928 ++ SPDK_TEST_USDT=1 00:01:20.928 ++ SPDK_RUN_UBSAN=1 00:01:20.928 ++ SPDK_TEST_NVMF_MDNS=1 00:01:20.928 ++ NET_TYPE=virt 00:01:20.928 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:20.928 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:20.928 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:20.928 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.928 ++ RUN_NIGHTLY=1 00:01:20.928 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:20.928 + nvme_files=() 00:01:20.929 + declare -A nvme_files 00:01:20.929 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.929 + nvme_files['nvme.img']=5G 00:01:20.929 + nvme_files['nvme-cmb.img']=5G 00:01:20.929 + nvme_files['nvme-multi0.img']=4G 00:01:20.929 + nvme_files['nvme-multi1.img']=4G 00:01:20.929 + nvme_files['nvme-multi2.img']=4G 00:01:20.929 + nvme_files['nvme-openstack.img']=8G 00:01:20.929 + nvme_files['nvme-zns.img']=5G 00:01:20.929 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.929 + (( SPDK_TEST_FTL == 1 )) 00:01:20.929 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.929 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.929 + for nvme in "${!nvme_files[@]}" 00:01:20.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:20.929 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.929 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:21.186 + echo 'End stage prepare_nvme.sh' 00:01:21.186 End stage prepare_nvme.sh 00:01:21.197 [Pipeline] sh 00:01:21.477 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.477 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:21.477 00:01:21.477 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:21.477 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:21.477 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.477 HELP=0 00:01:21.477 DRY_RUN=0 00:01:21.477 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:21.477 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.477 NVME_AUTO_CREATE=0 00:01:21.477 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:21.477 NVME_CMB=,, 00:01:21.477 NVME_PMR=,, 00:01:21.477 NVME_ZNS=,, 00:01:21.477 NVME_MS=,, 00:01:21.477 NVME_FDP=,, 00:01:21.477 
SPDK_VAGRANT_DISTRO=fedora38 00:01:21.477 SPDK_VAGRANT_VMCPU=10 00:01:21.477 SPDK_VAGRANT_VMRAM=12288 00:01:21.477 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.477 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.477 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.477 SPDK_OPENSTACK_NETWORK=0 00:01:21.477 VAGRANT_PACKAGE_BOX=0 00:01:21.477 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.477 FORCE_DISTRO=true 00:01:21.477 VAGRANT_BOX_VERSION= 00:01:21.477 EXTRA_VAGRANTFILES= 00:01:21.477 NIC_MODEL=e1000 00:01:21.477 00:01:21.477 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:21.477 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:24.764 Bringing machine 'default' up with 'libvirt' provider... 00:01:25.022 ==> default: Creating image (snapshot of base box volume). 00:01:25.281 ==> default: Creating domain with the following settings... 00:01:25.281 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720981052_85376ff6e200223ea863 00:01:25.281 ==> default: -- Domain type: kvm 00:01:25.281 ==> default: -- Cpus: 10 00:01:25.281 ==> default: -- Feature: acpi 00:01:25.281 ==> default: -- Feature: apic 00:01:25.281 ==> default: -- Feature: pae 00:01:25.281 ==> default: -- Memory: 12288M 00:01:25.281 ==> default: -- Memory Backing: hugepages: 00:01:25.281 ==> default: -- Management MAC: 00:01:25.281 ==> default: -- Loader: 00:01:25.281 ==> default: -- Nvram: 00:01:25.281 ==> default: -- Base box: spdk/fedora38 00:01:25.281 ==> default: -- Storage pool: default 00:01:25.281 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720981052_85376ff6e200223ea863.img (20G) 00:01:25.281 ==> default: -- Volume Cache: default 00:01:25.281 ==> default: -- Kernel: 00:01:25.281 ==> default: -- Initrd: 00:01:25.281 ==> default: -- Graphics Type: vnc 00:01:25.281 ==> default: -- Graphics Port: -1 00:01:25.281 ==> default: -- Graphics IP: 127.0.0.1 00:01:25.281 ==> default: -- Graphics Password: Not defined 00:01:25.281 ==> default: -- Video Type: cirrus 00:01:25.281 ==> default: -- Video VRAM: 9216 00:01:25.281 ==> default: -- Sound Type: 00:01:25.281 ==> default: -- Keymap: en-us 00:01:25.281 ==> default: -- TPM Path: 00:01:25.281 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:25.281 ==> default: -- Command line args: 00:01:25.281 ==> default: -> value=-device, 00:01:25.281 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:25.281 ==> default: -> value=-drive, 00:01:25.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:25.281 ==> default: -> value=-device, 00:01:25.281 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.281 ==> default: -> value=-device, 00:01:25.281 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:25.281 ==> default: -> value=-drive, 00:01:25.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:25.281 ==> default: -> value=-device, 00:01:25.281 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.281 ==> default: -> value=-drive, 00:01:25.281 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:25.281 ==> default: -> value=-device, 00:01:25.281 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.281 ==> default: -> value=-drive, 00:01:25.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:25.281 ==> default: -> value=-device, 00:01:25.281 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.281 ==> default: Creating shared folders metadata... 00:01:25.281 ==> default: Starting domain. 00:01:27.209 ==> default: Waiting for domain to get an IP address... 00:01:45.287 ==> default: Waiting for SSH to become available... 00:01:45.287 ==> default: Configuring and enabling network interfaces... 00:01:49.526 default: SSH address: 192.168.121.38:22 00:01:49.526 default: SSH username: vagrant 00:01:49.526 default: SSH auth method: private key 00:01:51.430 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:58.146 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:04.699 ==> default: Mounting SSHFS shared folder... 00:02:05.645 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:05.645 ==> default: Checking Mount.. 00:02:07.016 ==> default: Folder Successfully Mounted! 00:02:07.016 ==> default: Running provisioner: file... 00:02:07.949 default: ~/.gitconfig => .gitconfig 00:02:08.207 00:02:08.207 SUCCESS! 00:02:08.207 00:02:08.207 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:08.207 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:08.207 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:08.207 00:02:08.216 [Pipeline] } 00:02:08.234 [Pipeline] // stage 00:02:08.244 [Pipeline] dir 00:02:08.244 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:02:08.246 [Pipeline] { 00:02:08.260 [Pipeline] catchError 00:02:08.262 [Pipeline] { 00:02:08.274 [Pipeline] sh 00:02:08.549 + vagrant ssh-config --host vagrant 00:02:08.549 + sed -ne /^Host/,$p 00:02:08.549 + tee ssh_conf 00:02:12.762 Host vagrant 00:02:12.762 HostName 192.168.121.38 00:02:12.762 User vagrant 00:02:12.762 Port 22 00:02:12.762 UserKnownHostsFile /dev/null 00:02:12.762 StrictHostKeyChecking no 00:02:12.762 PasswordAuthentication no 00:02:12.762 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:12.762 IdentitiesOnly yes 00:02:12.762 LogLevel FATAL 00:02:12.762 ForwardAgent yes 00:02:12.762 ForwardX11 yes 00:02:12.762 00:02:12.775 [Pipeline] withEnv 00:02:12.778 [Pipeline] { 00:02:12.793 [Pipeline] sh 00:02:13.070 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:13.070 source /etc/os-release 00:02:13.070 [[ -e /image.version ]] && img=$(< /image.version) 00:02:13.070 # Minimal, systemd-like check. 
00:02:13.070 if [[ -e /.dockerenv ]]; then 00:02:13.070 # Clear garbage from the node's name: 00:02:13.070 # agt-er_autotest_547-896 -> autotest_547-896 00:02:13.070 # $HOSTNAME is the actual container id 00:02:13.070 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:13.070 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:13.070 # We can assume this is a mount from a host where container is running, 00:02:13.070 # so fetch its hostname to easily identify the target swarm worker. 00:02:13.070 container="$(< /etc/hostname) ($agent)" 00:02:13.070 else 00:02:13.070 # Fallback 00:02:13.070 container=$agent 00:02:13.070 fi 00:02:13.070 fi 00:02:13.070 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:13.070 00:02:13.339 [Pipeline] } 00:02:13.359 [Pipeline] // withEnv 00:02:13.368 [Pipeline] setCustomBuildProperty 00:02:13.386 [Pipeline] stage 00:02:13.389 [Pipeline] { (Tests) 00:02:13.413 [Pipeline] sh 00:02:13.693 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:13.709 [Pipeline] sh 00:02:13.989 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:14.008 [Pipeline] timeout 00:02:14.009 Timeout set to expire in 40 min 00:02:14.011 [Pipeline] { 00:02:14.029 [Pipeline] sh 00:02:14.309 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:14.906 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:14.920 [Pipeline] sh 00:02:15.300 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:15.572 [Pipeline] sh 00:02:15.851 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:16.123 [Pipeline] sh 00:02:16.397 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:16.397 ++ readlink -f spdk_repo 00:02:16.397 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:16.397 + [[ -n /home/vagrant/spdk_repo ]] 00:02:16.397 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:16.397 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:16.397 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:16.397 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:16.397 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:16.397 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:16.397 + cd /home/vagrant/spdk_repo 00:02:16.397 + source /etc/os-release 00:02:16.397 ++ NAME='Fedora Linux' 00:02:16.397 ++ VERSION='38 (Cloud Edition)' 00:02:16.397 ++ ID=fedora 00:02:16.397 ++ VERSION_ID=38 00:02:16.397 ++ VERSION_CODENAME= 00:02:16.397 ++ PLATFORM_ID=platform:f38 00:02:16.397 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:16.397 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:16.397 ++ LOGO=fedora-logo-icon 00:02:16.397 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:16.397 ++ HOME_URL=https://fedoraproject.org/ 00:02:16.397 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:16.397 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:16.397 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:16.397 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:16.397 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:16.397 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:16.397 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:16.397 ++ SUPPORT_END=2024-05-14 00:02:16.397 ++ VARIANT='Cloud Edition' 00:02:16.397 ++ VARIANT_ID=cloud 00:02:16.397 + uname -a 00:02:16.655 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:16.655 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:16.655 Hugepages 00:02:16.655 node hugesize free / total 00:02:16.655 node0 1048576kB 0 / 0 00:02:16.655 node0 2048kB 0 / 0 00:02:16.655 00:02:16.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:16.655 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:16.655 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:16.655 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:16.655 + rm -f /tmp/spdk-ld-path 00:02:16.655 + source autorun-spdk.conf 00:02:16.655 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:16.655 ++ SPDK_TEST_NVMF=1 00:02:16.655 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:16.655 ++ SPDK_TEST_USDT=1 00:02:16.655 ++ SPDK_RUN_UBSAN=1 00:02:16.655 ++ SPDK_TEST_NVMF_MDNS=1 00:02:16.655 ++ NET_TYPE=virt 00:02:16.655 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:16.655 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:16.655 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:16.655 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:16.655 ++ RUN_NIGHTLY=1 00:02:16.655 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:16.655 + [[ -n '' ]] 00:02:16.655 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:16.912 + for M in /var/spdk/build-*-manifest.txt 00:02:16.912 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:16.912 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:16.912 + for M in /var/spdk/build-*-manifest.txt 00:02:16.912 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:16.912 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:16.912 ++ uname 00:02:16.912 + [[ Linux == \L\i\n\u\x ]] 00:02:16.912 + sudo dmesg -T 00:02:16.912 + sudo dmesg --clear 00:02:16.912 + dmesg_pid=5977 00:02:16.912 + [[ Fedora Linux == FreeBSD ]] 00:02:16.912 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.912 + sudo dmesg -Tw 00:02:16.912 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.912 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:16.912 + [[ -x /usr/src/fio-static/fio ]] 00:02:16.912 + 
export FIO_BIN=/usr/src/fio-static/fio 00:02:16.912 + FIO_BIN=/usr/src/fio-static/fio 00:02:16.912 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:16.912 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:16.912 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:16.912 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.912 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.912 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:16.912 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.912 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.912 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.912 Test configuration: 00:02:16.912 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:16.912 SPDK_TEST_NVMF=1 00:02:16.912 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:16.912 SPDK_TEST_USDT=1 00:02:16.912 SPDK_RUN_UBSAN=1 00:02:16.912 SPDK_TEST_NVMF_MDNS=1 00:02:16.912 NET_TYPE=virt 00:02:16.912 SPDK_JSONRPC_GO_CLIENT=1 00:02:16.912 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:16.912 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:16.912 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:16.912 RUN_NIGHTLY=1 18:18:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:16.912 18:18:24 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:16.912 18:18:24 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.912 18:18:24 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.912 18:18:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.912 18:18:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.912 18:18:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.912 18:18:24 -- paths/export.sh@5 -- $ export PATH 00:02:16.912 18:18:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.912 18:18:24 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:16.912 18:18:24 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:16.912 18:18:24 -- common/autobuild_common.sh@435 -- 
$ mktemp -dt spdk_1720981104.XXXXXX 00:02:16.912 18:18:24 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720981104.bM8J61 00:02:16.912 18:18:24 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:16.913 18:18:24 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:02:16.913 18:18:24 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:16.913 18:18:24 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:16.913 18:18:24 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:16.913 18:18:24 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:16.913 18:18:24 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:16.913 18:18:24 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:16.913 18:18:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.913 18:18:24 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:16.913 18:18:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.913 18:18:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.913 18:18:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:16.913 18:18:24 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.913 Sun Jul 14 06:18:24 PM UTC 2024 00:02:16.913 18:18:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.913 LTS-59-g4b94202c6 00:02:16.913 18:18:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.913 18:18:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:16.913 18:18:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:16.913 18:18:24 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:16.913 18:18:24 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:16.913 18:18:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.913 ************************************ 00:02:16.913 START TEST ubsan 00:02:16.913 ************************************ 00:02:16.913 18:18:24 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:16.913 using ubsan 00:02:16.913 00:02:16.913 real 0m0.000s 00:02:16.913 user 0m0.000s 00:02:16.913 sys 0m0.000s 00:02:16.913 18:18:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:16.913 18:18:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.913 ************************************ 00:02:16.913 END TEST ubsan 00:02:16.913 ************************************ 00:02:17.171 18:18:24 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:17.171 18:18:24 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:17.171 18:18:24 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:17.171 18:18:24 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:17.171 18:18:24 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:17.171 18:18:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.171 ************************************ 00:02:17.171 START TEST build_native_dpdk 00:02:17.171 ************************************ 00:02:17.171 18:18:24 -- 
common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:17.171 18:18:24 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:17.171 18:18:24 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:17.171 18:18:24 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:17.171 18:18:24 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:17.171 18:18:24 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:17.171 18:18:24 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:17.171 18:18:24 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:17.171 18:18:24 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:17.171 18:18:24 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:17.171 18:18:24 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:17.171 18:18:24 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:17.171 18:18:24 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:17.171 18:18:24 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:17.171 18:18:24 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:17.171 18:18:24 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:17.171 18:18:24 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:17.171 18:18:24 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:17.171 eeb0605f11 version: 23.11.0 00:02:17.171 238778122a doc: update release notes for 23.11 00:02:17.171 46aa6b3cfc doc: fix description of RSS features 00:02:17.171 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:17.171 7e421ae345 devtools: support skipping forbid rule check 00:02:17.171 18:18:24 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:17.171 18:18:24 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:17.171 18:18:24 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:17.171 18:18:24 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:17.171 18:18:24 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:17.171 18:18:24 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:17.171 18:18:24 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:17.171 18:18:24 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:17.171 18:18:24 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:17.171 18:18:24 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:17.171 18:18:24 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:17.171 18:18:24 -- 
common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:17.171 18:18:24 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:17.171 18:18:24 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:17.171 18:18:24 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:17.171 18:18:24 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:17.171 18:18:24 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:17.171 18:18:24 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:17.171 18:18:24 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:17.171 18:18:24 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:17.171 18:18:24 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:17.171 18:18:24 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:17.171 18:18:24 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:17.171 18:18:24 -- scripts/common.sh@343 -- $ case "$op" in 00:02:17.171 18:18:24 -- scripts/common.sh@344 -- $ : 1 00:02:17.171 18:18:24 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:17.171 18:18:24 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:17.171 18:18:24 -- scripts/common.sh@364 -- $ decimal 23 00:02:17.171 18:18:24 -- scripts/common.sh@352 -- $ local d=23 00:02:17.171 18:18:24 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:17.171 18:18:24 -- scripts/common.sh@354 -- $ echo 23 00:02:17.171 18:18:24 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:17.171 18:18:24 -- scripts/common.sh@365 -- $ decimal 21 00:02:17.171 18:18:24 -- scripts/common.sh@352 -- $ local d=21 00:02:17.171 18:18:24 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:17.171 18:18:24 -- scripts/common.sh@354 -- $ echo 21 00:02:17.171 18:18:24 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:17.171 18:18:24 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:17.171 18:18:24 -- scripts/common.sh@366 -- $ return 1 00:02:17.171 18:18:24 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:17.171 patching file config/rte_config.h 00:02:17.171 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:17.171 18:18:24 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:17.171 18:18:24 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:17.171 18:18:24 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:17.171 18:18:24 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:17.171 18:18:24 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:22.437 The Meson build system 00:02:22.437 Version: 1.3.1 00:02:22.437 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:22.437 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:22.437 Build type: native build 00:02:22.437 Program cat found: YES (/usr/bin/cat) 00:02:22.437 Project name: DPDK 00:02:22.437 Project version: 23.11.0 00:02:22.437 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:22.437 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:22.437 Host machine cpu family: x86_64 00:02:22.437 Host machine cpu: x86_64 00:02:22.437 Message: ## Building in Developer Mode ## 00:02:22.437 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.437 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:22.437 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.437 Program python3 found: YES (/usr/bin/python3) 00:02:22.437 Program cat found: YES (/usr/bin/cat) 00:02:22.437 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:22.437 Compiler for C supports arguments -march=native: YES 00:02:22.437 Checking for size of "void *" : 8 00:02:22.437 Checking for size of "void *" : 8 (cached) 00:02:22.437 Library m found: YES 00:02:22.437 Library numa found: YES 00:02:22.437 Has header "numaif.h" : YES 00:02:22.437 Library fdt found: NO 00:02:22.437 Library execinfo found: NO 00:02:22.437 Has header "execinfo.h" : YES 00:02:22.437 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:22.437 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.437 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.437 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.437 Run-time dependency openssl found: YES 3.0.9 00:02:22.437 Run-time dependency libpcap found: YES 1.10.4 00:02:22.437 Has header "pcap.h" with dependency libpcap: YES 00:02:22.437 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.437 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.437 Compiler for C supports arguments -Wformat: YES 00:02:22.437 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:22.437 Compiler for C supports arguments -Wformat-security: NO 00:02:22.437 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.437 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.437 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.437 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.437 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.437 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.437 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.437 Compiler for C supports arguments -Wundef: YES 00:02:22.437 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.437 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:22.437 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:22.437 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.437 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:22.437 Program objdump found: YES (/usr/bin/objdump) 00:02:22.437 Compiler for C supports arguments -mavx512f: YES 00:02:22.437 Checking if "AVX512 checking" compiles: YES 00:02:22.437 Fetching value of define "__SSE4_2__" : 1 00:02:22.437 Fetching value of define "__AES__" : 1 00:02:22.437 Fetching value of define "__AVX__" : 1 00:02:22.437 Fetching value of define "__AVX2__" : 1 00:02:22.437 Fetching value of define "__AVX512BW__" : (undefined) 00:02:22.437 Fetching value of define "__AVX512CD__" : (undefined) 00:02:22.437 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:22.437 Fetching value of define "__AVX512F__" : (undefined) 00:02:22.437 Fetching value of define "__AVX512VL__" : (undefined) 00:02:22.437 Fetching value of define "__PCLMUL__" : 1 00:02:22.437 Fetching value of define "__RDRND__" : 1 00:02:22.437 Fetching value of define "__RDSEED__" : 1 00:02:22.437 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.437 Fetching value of define "__znver1__" : (undefined) 00:02:22.437 Fetching value of define "__znver2__" : (undefined) 00:02:22.437 Fetching value of define "__znver3__" : (undefined) 00:02:22.437 Fetching value of define "__znver4__" : (undefined) 00:02:22.437 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:22.437 Message: lib/log: Defining dependency "log" 00:02:22.437 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.437 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.437 Checking for function "getentropy" : NO 00:02:22.437 Message: lib/eal: Defining dependency "eal" 00:02:22.437 Message: lib/ring: Defining dependency "ring" 00:02:22.437 Message: lib/rcu: Defining dependency "rcu" 00:02:22.437 Message: lib/mempool: Defining dependency "mempool" 00:02:22.437 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.437 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.437 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:22.437 Compiler for C supports arguments -mpclmul: YES 00:02:22.437 Compiler for C supports arguments -maes: YES 00:02:22.437 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.437 Compiler for C supports arguments -mavx512bw: YES 00:02:22.437 Compiler for C supports arguments -mavx512dq: YES 00:02:22.437 Compiler for C supports arguments -mavx512vl: YES 00:02:22.437 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:22.437 Compiler for C supports arguments -mavx2: YES 00:02:22.437 Compiler for C supports arguments -mavx: YES 00:02:22.437 Message: lib/net: Defining dependency "net" 00:02:22.437 Message: lib/meter: Defining dependency "meter" 00:02:22.437 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.437 Message: lib/pci: Defining dependency "pci" 00:02:22.437 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.437 Message: lib/metrics: Defining dependency "metrics" 00:02:22.437 Message: lib/hash: Defining dependency "hash" 00:02:22.437 Message: lib/timer: Defining dependency "timer" 00:02:22.437 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:22.437 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:22.437 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:22.437 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:22.437 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:22.437 Message: lib/acl: Defining dependency "acl" 00:02:22.437 Message: lib/bbdev: Defining dependency "bbdev" 00:02:22.437 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:22.437 Run-time dependency libelf found: YES 0.190 00:02:22.437 Message: lib/bpf: Defining dependency "bpf" 00:02:22.437 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:22.437 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.437 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.437 Message: lib/distributor: Defining dependency "distributor" 00:02:22.437 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.437 Message: lib/efd: Defining dependency "efd" 00:02:22.437 Message: lib/eventdev: Defining dependency "eventdev" 00:02:22.438 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:22.438 Message: lib/gpudev: Defining dependency "gpudev" 00:02:22.438 Message: lib/gro: Defining dependency "gro" 00:02:22.438 Message: lib/gso: Defining dependency "gso" 00:02:22.438 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:22.438 Message: lib/jobstats: Defining dependency "jobstats" 00:02:22.438 Message: lib/latencystats: Defining dependency "latencystats" 00:02:22.438 Message: lib/lpm: Defining dependency "lpm" 00:02:22.438 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:22.438 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:22.438 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:22.438 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:22.438 Message: lib/member: Defining dependency "member" 00:02:22.438 Message: lib/pcapng: Defining dependency "pcapng" 00:02:22.438 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.438 Message: lib/power: Defining dependency "power" 00:02:22.438 Message: lib/rawdev: Defining dependency "rawdev" 00:02:22.438 Message: lib/regexdev: Defining dependency "regexdev" 00:02:22.438 Message: lib/mldev: Defining dependency "mldev" 00:02:22.438 Message: lib/rib: Defining dependency "rib" 00:02:22.438 Message: lib/reorder: Defining dependency "reorder" 00:02:22.438 Message: lib/sched: Defining dependency "sched" 00:02:22.438 Message: lib/security: Defining dependency "security" 00:02:22.438 Message: lib/stack: Defining dependency "stack" 00:02:22.438 Has header "linux/userfaultfd.h" : YES 00:02:22.438 Has header "linux/vduse.h" : YES 00:02:22.438 Message: lib/vhost: Defining dependency "vhost" 00:02:22.438 Message: lib/ipsec: Defining dependency "ipsec" 00:02:22.438 Message: lib/pdcp: Defining dependency "pdcp" 00:02:22.438 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:22.438 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:22.438 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:22.438 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:22.438 Message: lib/fib: Defining dependency "fib" 00:02:22.438 Message: lib/port: Defining dependency "port" 00:02:22.438 Message: lib/pdump: Defining dependency "pdump" 00:02:22.438 Message: lib/table: Defining dependency "table" 00:02:22.438 Message: lib/pipeline: Defining dependency "pipeline" 00:02:22.438 Message: lib/graph: Defining dependency "graph" 00:02:22.438 Message: lib/node: Defining dependency "node" 00:02:22.438 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.810 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.810 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.810 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.810 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:23.810 Compiler for C supports arguments -Wno-unused-value: YES 00:02:23.810 Compiler for C supports arguments -Wno-format: YES 00:02:23.810 Compiler for C supports arguments -Wno-format-security: YES 00:02:23.810 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:23.810 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:23.810 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:23.810 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:23.810 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:23.810 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.810 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:23.810 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:23.810 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:23.810 Has header "sys/epoll.h" : YES 00:02:23.810 Program doxygen found: YES (/usr/bin/doxygen) 00:02:23.810 Configuring doxy-api-html.conf using configuration 00:02:23.810 Configuring doxy-api-man.conf using configuration 00:02:23.810 Program mandb found: YES (/usr/bin/mandb) 00:02:23.810 Program sphinx-build found: NO 00:02:23.810 Configuring rte_build_config.h using configuration 00:02:23.810 Message: 00:02:23.810 ================= 00:02:23.810 Applications Enabled 00:02:23.810 ================= 00:02:23.810 
00:02:23.810 apps: 00:02:23.810 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:23.810 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:23.810 test-pmd, test-regex, test-sad, test-security-perf, 00:02:23.810 00:02:23.810 Message: 00:02:23.810 ================= 00:02:23.810 Libraries Enabled 00:02:23.810 ================= 00:02:23.810 00:02:23.810 libs: 00:02:23.810 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.810 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:23.810 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:23.810 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:23.810 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:23.810 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:23.810 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:23.810 00:02:23.810 00:02:23.810 Message: 00:02:23.810 =============== 00:02:23.810 Drivers Enabled 00:02:23.810 =============== 00:02:23.810 00:02:23.810 common: 00:02:23.810 00:02:23.810 bus: 00:02:23.810 pci, vdev, 00:02:23.810 mempool: 00:02:23.810 ring, 00:02:23.810 dma: 00:02:23.810 00:02:23.810 net: 00:02:23.810 i40e, 00:02:23.810 raw: 00:02:23.811 00:02:23.811 crypto: 00:02:23.811 00:02:23.811 compress: 00:02:23.811 00:02:23.811 regex: 00:02:23.811 00:02:23.811 ml: 00:02:23.811 00:02:23.811 vdpa: 00:02:23.811 00:02:23.811 event: 00:02:23.811 00:02:23.811 baseband: 00:02:23.811 00:02:23.811 gpu: 00:02:23.811 00:02:23.811 00:02:23.811 Message: 00:02:23.811 ================= 00:02:23.811 Content Skipped 00:02:23.811 ================= 00:02:23.811 00:02:23.811 apps: 00:02:23.811 00:02:23.811 libs: 00:02:23.811 00:02:23.811 drivers: 00:02:23.811 common/cpt: not in enabled drivers build config 00:02:23.811 common/dpaax: not in enabled drivers build config 00:02:23.811 common/iavf: not in enabled drivers build config 00:02:23.811 common/idpf: not in enabled drivers build config 00:02:23.811 common/mvep: not in enabled drivers build config 00:02:23.811 common/octeontx: not in enabled drivers build config 00:02:23.811 bus/auxiliary: not in enabled drivers build config 00:02:23.811 bus/cdx: not in enabled drivers build config 00:02:23.811 bus/dpaa: not in enabled drivers build config 00:02:23.811 bus/fslmc: not in enabled drivers build config 00:02:23.811 bus/ifpga: not in enabled drivers build config 00:02:23.811 bus/platform: not in enabled drivers build config 00:02:23.811 bus/vmbus: not in enabled drivers build config 00:02:23.811 common/cnxk: not in enabled drivers build config 00:02:23.811 common/mlx5: not in enabled drivers build config 00:02:23.811 common/nfp: not in enabled drivers build config 00:02:23.811 common/qat: not in enabled drivers build config 00:02:23.811 common/sfc_efx: not in enabled drivers build config 00:02:23.811 mempool/bucket: not in enabled drivers build config 00:02:23.811 mempool/cnxk: not in enabled drivers build config 00:02:23.811 mempool/dpaa: not in enabled drivers build config 00:02:23.811 mempool/dpaa2: not in enabled drivers build config 00:02:23.811 mempool/octeontx: not in enabled drivers build config 00:02:23.811 mempool/stack: not in enabled drivers build config 00:02:23.811 dma/cnxk: not in enabled drivers build config 00:02:23.811 dma/dpaa: not in enabled drivers build config 00:02:23.811 dma/dpaa2: not in enabled drivers build config 00:02:23.811 dma/hisilicon: 
not in enabled drivers build config 00:02:23.811 dma/idxd: not in enabled drivers build config 00:02:23.811 dma/ioat: not in enabled drivers build config 00:02:23.811 dma/skeleton: not in enabled drivers build config 00:02:23.811 net/af_packet: not in enabled drivers build config 00:02:23.811 net/af_xdp: not in enabled drivers build config 00:02:23.811 net/ark: not in enabled drivers build config 00:02:23.811 net/atlantic: not in enabled drivers build config 00:02:23.811 net/avp: not in enabled drivers build config 00:02:23.811 net/axgbe: not in enabled drivers build config 00:02:23.811 net/bnx2x: not in enabled drivers build config 00:02:23.811 net/bnxt: not in enabled drivers build config 00:02:23.811 net/bonding: not in enabled drivers build config 00:02:23.811 net/cnxk: not in enabled drivers build config 00:02:23.811 net/cpfl: not in enabled drivers build config 00:02:23.811 net/cxgbe: not in enabled drivers build config 00:02:23.811 net/dpaa: not in enabled drivers build config 00:02:23.811 net/dpaa2: not in enabled drivers build config 00:02:23.811 net/e1000: not in enabled drivers build config 00:02:23.811 net/ena: not in enabled drivers build config 00:02:23.811 net/enetc: not in enabled drivers build config 00:02:23.811 net/enetfec: not in enabled drivers build config 00:02:23.811 net/enic: not in enabled drivers build config 00:02:23.811 net/failsafe: not in enabled drivers build config 00:02:23.811 net/fm10k: not in enabled drivers build config 00:02:23.811 net/gve: not in enabled drivers build config 00:02:23.811 net/hinic: not in enabled drivers build config 00:02:23.811 net/hns3: not in enabled drivers build config 00:02:23.811 net/iavf: not in enabled drivers build config 00:02:23.811 net/ice: not in enabled drivers build config 00:02:23.811 net/idpf: not in enabled drivers build config 00:02:23.811 net/igc: not in enabled drivers build config 00:02:23.811 net/ionic: not in enabled drivers build config 00:02:23.811 net/ipn3ke: not in enabled drivers build config 00:02:23.811 net/ixgbe: not in enabled drivers build config 00:02:23.811 net/mana: not in enabled drivers build config 00:02:23.811 net/memif: not in enabled drivers build config 00:02:23.811 net/mlx4: not in enabled drivers build config 00:02:23.811 net/mlx5: not in enabled drivers build config 00:02:23.811 net/mvneta: not in enabled drivers build config 00:02:23.811 net/mvpp2: not in enabled drivers build config 00:02:23.811 net/netvsc: not in enabled drivers build config 00:02:23.811 net/nfb: not in enabled drivers build config 00:02:23.811 net/nfp: not in enabled drivers build config 00:02:23.811 net/ngbe: not in enabled drivers build config 00:02:23.811 net/null: not in enabled drivers build config 00:02:23.811 net/octeontx: not in enabled drivers build config 00:02:23.811 net/octeon_ep: not in enabled drivers build config 00:02:23.811 net/pcap: not in enabled drivers build config 00:02:23.811 net/pfe: not in enabled drivers build config 00:02:23.811 net/qede: not in enabled drivers build config 00:02:23.811 net/ring: not in enabled drivers build config 00:02:23.811 net/sfc: not in enabled drivers build config 00:02:23.811 net/softnic: not in enabled drivers build config 00:02:23.811 net/tap: not in enabled drivers build config 00:02:23.811 net/thunderx: not in enabled drivers build config 00:02:23.811 net/txgbe: not in enabled drivers build config 00:02:23.811 net/vdev_netvsc: not in enabled drivers build config 00:02:23.811 net/vhost: not in enabled drivers build config 00:02:23.811 net/virtio: not in enabled 
drivers build config 00:02:23.811 net/vmxnet3: not in enabled drivers build config 00:02:23.811 raw/cnxk_bphy: not in enabled drivers build config 00:02:23.811 raw/cnxk_gpio: not in enabled drivers build config 00:02:23.811 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:23.811 raw/ifpga: not in enabled drivers build config 00:02:23.811 raw/ntb: not in enabled drivers build config 00:02:23.811 raw/skeleton: not in enabled drivers build config 00:02:23.811 crypto/armv8: not in enabled drivers build config 00:02:23.811 crypto/bcmfs: not in enabled drivers build config 00:02:23.811 crypto/caam_jr: not in enabled drivers build config 00:02:23.811 crypto/ccp: not in enabled drivers build config 00:02:23.811 crypto/cnxk: not in enabled drivers build config 00:02:23.811 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.811 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.811 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.811 crypto/mlx5: not in enabled drivers build config 00:02:23.811 crypto/mvsam: not in enabled drivers build config 00:02:23.811 crypto/nitrox: not in enabled drivers build config 00:02:23.811 crypto/null: not in enabled drivers build config 00:02:23.811 crypto/octeontx: not in enabled drivers build config 00:02:23.811 crypto/openssl: not in enabled drivers build config 00:02:23.811 crypto/scheduler: not in enabled drivers build config 00:02:23.811 crypto/uadk: not in enabled drivers build config 00:02:23.811 crypto/virtio: not in enabled drivers build config 00:02:23.811 compress/isal: not in enabled drivers build config 00:02:23.811 compress/mlx5: not in enabled drivers build config 00:02:23.811 compress/octeontx: not in enabled drivers build config 00:02:23.811 compress/zlib: not in enabled drivers build config 00:02:23.811 regex/mlx5: not in enabled drivers build config 00:02:23.811 regex/cn9k: not in enabled drivers build config 00:02:23.811 ml/cnxk: not in enabled drivers build config 00:02:23.811 vdpa/ifc: not in enabled drivers build config 00:02:23.811 vdpa/mlx5: not in enabled drivers build config 00:02:23.811 vdpa/nfp: not in enabled drivers build config 00:02:23.811 vdpa/sfc: not in enabled drivers build config 00:02:23.811 event/cnxk: not in enabled drivers build config 00:02:23.811 event/dlb2: not in enabled drivers build config 00:02:23.811 event/dpaa: not in enabled drivers build config 00:02:23.811 event/dpaa2: not in enabled drivers build config 00:02:23.811 event/dsw: not in enabled drivers build config 00:02:23.811 event/opdl: not in enabled drivers build config 00:02:23.811 event/skeleton: not in enabled drivers build config 00:02:23.811 event/sw: not in enabled drivers build config 00:02:23.811 event/octeontx: not in enabled drivers build config 00:02:23.811 baseband/acc: not in enabled drivers build config 00:02:23.811 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:23.811 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:23.811 baseband/la12xx: not in enabled drivers build config 00:02:23.811 baseband/null: not in enabled drivers build config 00:02:23.811 baseband/turbo_sw: not in enabled drivers build config 00:02:23.811 gpu/cuda: not in enabled drivers build config 00:02:23.811 00:02:23.811 00:02:23.811 Build targets in project: 220 00:02:23.811 00:02:23.811 DPDK 23.11.0 00:02:23.811 00:02:23.811 User defined options 00:02:23.811 libdir : lib 00:02:23.811 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:23.811 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:02:23.811 c_link_args : 00:02:23.811 enable_docs : false 00:02:23.811 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:23.811 enable_kmods : false 00:02:23.811 machine : native 00:02:23.811 tests : false 00:02:23.811 00:02:23.811 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.811 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:24.069 18:18:31 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:24.069 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:24.069 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.069 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.069 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.069 [4/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.327 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.327 [6/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.327 [7/710] Linking static target lib/librte_kvargs.a 00:02:24.327 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.327 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.327 [10/710] Linking static target lib/librte_log.a 00:02:24.584 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.584 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.584 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.584 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.842 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.842 [16/710] Linking target lib/librte_log.so.24.0 00:02:24.842 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.842 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.842 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.099 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.099 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.099 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:25.099 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.099 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:25.359 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.359 [26/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:25.359 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.359 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.359 [29/710] Linking static target lib/librte_telemetry.a 00:02:25.359 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.359 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.635 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:25.635 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.918 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.918 [35/710] Linking target lib/librte_telemetry.so.24.0 00:02:25.918 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.918 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.918 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.918 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.918 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.918 [41/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:25.918 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.918 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.918 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:26.179 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.179 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:26.437 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.437 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:26.437 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:26.437 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.695 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.695 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.695 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.695 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.695 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.953 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.953 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.953 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.953 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.953 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.953 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.212 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.212 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.212 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.212 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.212 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.212 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.469 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.469 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.728 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.728 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.728 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:27.728 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.728 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.728 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.728 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.728 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.987 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.987 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.245 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.245 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.245 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.245 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.245 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.245 [85/710] Linking static target lib/librte_ring.a 00:02:28.504 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.504 [87/710] Linking static target lib/librte_eal.a 00:02:28.504 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.504 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.762 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.762 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.762 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.762 [93/710] Linking static target lib/librte_mempool.a 00:02:28.762 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.021 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.021 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.021 [97/710] Linking static target lib/librte_rcu.a 00:02:29.279 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.279 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.279 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.279 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.537 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.537 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.537 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.537 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.537 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:29.794 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.794 [108/710] Linking static target lib/librte_mbuf.a 00:02:29.794 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:29.794 [110/710] Linking static target lib/librte_net.a 00:02:30.052 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.052 [112/710] Linking static target lib/librte_meter.a 00:02:30.052 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.052 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.052 
[115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.052 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.309 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.309 [118/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.309 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.877 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.877 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.134 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.134 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.391 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.391 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:31.391 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.391 [127/710] Linking static target lib/librte_pci.a 00:02:31.391 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.391 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.391 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.649 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.649 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.649 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.649 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.649 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.649 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.649 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.649 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.649 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.649 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.907 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.907 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.165 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.165 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.165 [145/710] Linking static target lib/librte_cmdline.a 00:02:32.423 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:32.423 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:32.423 [148/710] Linking static target lib/librte_metrics.a 00:02:32.423 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.423 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.681 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.939 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.939 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.939 [154/710] Linking static target 
lib/librte_timer.a 00:02:32.939 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.196 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.454 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:33.712 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:33.712 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:33.969 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:34.536 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:34.536 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:34.536 [163/710] Linking static target lib/librte_bitratestats.a 00:02:34.536 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:34.536 [165/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.536 [166/710] Linking static target lib/librte_ethdev.a 00:02:34.536 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.536 [168/710] Linking target lib/librte_eal.so.24.0 00:02:34.536 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.794 [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.794 [171/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:34.794 [172/710] Linking static target lib/librte_hash.a 00:02:34.794 [173/710] Linking static target lib/librte_bbdev.a 00:02:34.794 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:34.794 [175/710] Linking target lib/librte_ring.so.24.0 00:02:34.794 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:35.066 [177/710] Linking target lib/librte_rcu.so.24.0 00:02:35.066 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:35.066 [179/710] Linking target lib/librte_mempool.so.24.0 00:02:35.066 [180/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:35.066 [181/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:35.066 [182/710] Linking target lib/librte_meter.so.24.0 00:02:35.066 [183/710] Linking target lib/librte_pci.so.24.0 00:02:35.066 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:35.349 [185/710] Linking target lib/librte_mbuf.so.24.0 00:02:35.349 [186/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:35.349 [187/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:35.349 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:35.349 [189/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.349 [190/710] Linking static target lib/acl/libavx2_tmp.a 00:02:35.349 [191/710] Linking target lib/librte_timer.so.24.0 00:02:35.349 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:35.349 [193/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:35.349 [194/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.349 [195/710] Linking target lib/librte_net.so.24.0 00:02:35.349 [196/710] Linking target lib/librte_bbdev.so.24.0 00:02:35.349 [197/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:35.349 
[198/710] Linking static target lib/acl/libavx512_tmp.a 00:02:35.349 [199/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:35.607 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:35.607 [201/710] Linking target lib/librte_cmdline.so.24.0 00:02:35.607 [202/710] Linking target lib/librte_hash.so.24.0 00:02:35.607 [203/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:35.607 [204/710] Linking static target lib/librte_acl.a 00:02:35.607 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:35.607 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:35.607 [207/710] Linking static target lib/librte_cfgfile.a 00:02:35.866 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:35.866 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.123 [210/710] Linking target lib/librte_acl.so.24.0 00:02:36.123 [211/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.123 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:36.123 [213/710] Linking target lib/librte_cfgfile.so.24.0 00:02:36.123 [214/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:36.123 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:36.380 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:36.380 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:36.380 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:36.637 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:36.637 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:36.637 [221/710] Linking static target lib/librte_bpf.a 00:02:36.637 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:36.895 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.895 [224/710] Linking static target lib/librte_compressdev.a 00:02:36.895 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.895 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.153 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:37.153 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:37.153 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:37.153 [230/710] Linking static target lib/librte_distributor.a 00:02:37.410 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:37.410 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.410 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:37.410 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.410 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:37.668 [236/710] Linking static target lib/librte_dmadev.a 00:02:37.668 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:37.668 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:37.926 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.926 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:37.926 [241/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:38.184 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:38.184 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:38.441 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:38.442 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:38.442 [246/710] Linking static target lib/librte_efd.a 00:02:38.700 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.700 [248/710] Linking static target lib/librte_cryptodev.a 00:02:38.700 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.700 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:38.700 [251/710] Linking target lib/librte_efd.so.24.0 00:02:39.267 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:39.267 [253/710] Linking static target lib/librte_dispatcher.a 00:02:39.267 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:39.267 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.267 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:39.526 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:39.526 [258/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:39.526 [259/710] Linking target lib/librte_metrics.so.24.0 00:02:39.526 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:39.526 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:39.526 [262/710] Linking static target lib/librte_gpudev.a 00:02:39.526 [263/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.526 [264/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:39.526 [265/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:39.526 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:39.526 [267/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:39.526 [268/710] Linking target lib/librte_bitratestats.so.24.0 00:02:39.783 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:40.041 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.041 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:40.041 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:40.041 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:40.299 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:40.299 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.299 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:40.299 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:40.299 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:40.557 [279/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:40.557 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:40.557 [281/710] Linking static target lib/librte_gro.a 00:02:40.557 [282/710] Linking static target lib/librte_eventdev.a 00:02:40.557 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:40.557 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:40.557 [285/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.557 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:40.815 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:40.815 [288/710] Linking target lib/librte_gro.so.24.0 00:02:41.075 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:41.075 [290/710] Linking static target lib/librte_gso.a 00:02:41.075 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:41.075 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:41.075 [293/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.075 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:41.335 [295/710] Linking target lib/librte_gso.so.24.0 00:02:41.335 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:41.335 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:41.335 [298/710] Linking static target lib/librte_jobstats.a 00:02:41.335 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:41.594 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:41.594 [301/710] Linking static target lib/librte_latencystats.a 00:02:41.594 [302/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:41.594 [303/710] Linking static target lib/librte_ip_frag.a 00:02:41.594 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.594 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:41.594 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.852 [307/710] Linking target lib/librte_latencystats.so.24.0 00:02:41.852 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.852 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:41.852 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:41.852 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:41.852 [312/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:41.852 [313/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:42.110 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:42.110 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:42.110 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:42.110 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.368 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.633 [319/710] 
Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:42.633 [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:42.633 [321/710] Linking target lib/librte_eventdev.so.24.0 00:02:42.633 [322/710] Linking static target lib/librte_lpm.a 00:02:42.633 [323/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.633 [324/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:42.633 [325/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.633 [326/710] Linking target lib/librte_dispatcher.so.24.0 00:02:42.633 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:42.896 [328/710] Linking static target lib/librte_pcapng.a 00:02:42.897 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.897 [330/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:42.897 [331/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.897 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:42.897 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.897 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.897 [335/710] Linking target lib/librte_pcapng.so.24.0 00:02:43.154 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:43.154 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:43.154 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:43.154 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:43.412 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:43.412 [341/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:43.412 [342/710] Linking static target lib/librte_power.a 00:02:43.669 [343/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:43.669 [344/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:43.669 [345/710] Linking static target lib/librte_member.a 00:02:43.669 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:43.669 [347/710] Linking static target lib/librte_regexdev.a 00:02:43.669 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:43.669 [349/710] Linking static target lib/librte_rawdev.a 00:02:43.669 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:43.927 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:43.927 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:43.927 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.927 [354/710] Linking target lib/librte_member.so.24.0 00:02:43.927 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:43.927 [356/710] Linking static target lib/librte_mldev.a 00:02:44.184 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.184 [358/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:44.184 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:44.184 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:44.184 [361/710] Linking target lib/librte_power.so.24.0 00:02:44.184 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:44.441 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.441 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:44.441 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:44.699 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.699 [367/710] Linking static target lib/librte_reorder.a 00:02:44.699 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.699 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:44.699 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:44.699 [371/710] Linking static target lib/librte_rib.a 00:02:44.699 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:44.699 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:44.956 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.956 [375/710] Linking target lib/librte_reorder.so.24.0 00:02:44.956 [376/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:44.956 [377/710] Linking static target lib/librte_stack.a 00:02:44.956 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.956 [379/710] Linking static target lib/librte_security.a 00:02:44.956 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:45.213 [381/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.213 [382/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.213 [383/710] Linking target lib/librte_stack.so.24.0 00:02:45.213 [384/710] Linking target lib/librte_rib.so.24.0 00:02:45.213 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.213 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:45.470 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:45.470 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.470 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.470 [390/710] Linking target lib/librte_security.so.24.0 00:02:45.470 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.470 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:45.727 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.727 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:45.984 [395/710] Linking static target lib/librte_sched.a 00:02:46.241 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.241 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.241 [398/710] Linking target lib/librte_sched.so.24.0 00:02:46.241 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.241 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.499 [401/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:46.499 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:46.756 [403/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:46.756 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:47.014 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:47.014 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:47.272 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:47.272 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:47.272 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:47.531 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:47.531 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:47.531 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:47.531 [413/710] Linking static target lib/librte_ipsec.a 00:02:47.789 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:47.789 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:47.789 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.047 [417/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:48.047 [418/710] Linking target lib/librte_ipsec.so.24.0 00:02:48.047 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:48.047 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:48.047 [421/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:48.047 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:48.047 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:48.981 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:48.981 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:48.981 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:48.981 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:48.981 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:48.981 [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:48.981 [430/710] Linking static target lib/librte_fib.a 00:02:48.981 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:48.981 [432/710] Linking static target lib/librte_pdcp.a 00:02:49.238 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.238 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.496 [435/710] Linking target lib/librte_fib.so.24.0 00:02:49.496 [436/710] Linking target lib/librte_pdcp.so.24.0 00:02:49.496 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:50.062 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:50.062 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:50.062 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:50.062 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:50.062 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:50.320 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:50.320 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:50.320 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 
00:02:50.578 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:50.578 [447/710] Linking static target lib/librte_port.a 00:02:50.835 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:50.835 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:50.835 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:50.835 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:51.093 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.093 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:51.093 [454/710] Linking target lib/librte_port.so.24.0 00:02:51.093 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:51.093 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:51.093 [457/710] Linking static target lib/librte_pdump.a 00:02:51.350 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:51.350 [459/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.350 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.350 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:51.350 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:51.915 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:51.915 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:51.915 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:51.915 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:52.173 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:52.173 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:52.432 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:52.432 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:52.432 [471/710] Linking static target lib/librte_table.a 00:02:52.432 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:52.701 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:52.975 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.233 [475/710] Linking target lib/librte_table.so.24.0 00:02:53.233 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:53.233 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:53.233 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:53.491 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:53.491 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:53.750 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:54.007 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:54.007 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:54.007 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:54.007 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:54.007 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:54.573 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:54.573 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:54.831 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:54.831 [490/710] Linking static target lib/librte_graph.a 00:02:54.831 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:54.831 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:54.831 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:55.396 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.396 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:55.396 [496/710] Linking target lib/librte_graph.so.24.0 00:02:55.396 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:55.396 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:55.658 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:55.916 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:55.916 [501/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:55.916 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:55.916 [503/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:55.916 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:56.173 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.173 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:56.429 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:56.429 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:56.687 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.687 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.687 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.687 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:56.687 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.944 [514/710] Linking static target lib/librte_node.a 00:02:56.944 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.202 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.202 [517/710] Linking target lib/librte_node.so.24.0 00:02:57.202 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.202 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.460 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.460 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.460 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.460 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.460 [524/710] Linking static target drivers/librte_bus_vdev.a 00:02:57.460 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.460 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.460 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:02:57.718 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.718 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.718 [530/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:57.718 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.718 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:57.976 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:57.976 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:57.976 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:57.976 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.976 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.234 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.234 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:58.234 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.234 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.234 [542/710] Linking static target drivers/librte_mempool_ring.a 00:02:58.234 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.234 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:58.234 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:58.492 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:58.750 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:59.008 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:59.008 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:59.008 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:59.008 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:59.942 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:59.942 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:59.942 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:59.942 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:00.200 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:00.200 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:00.458 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:00.716 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:00.716 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:00.974 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:00.974 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:01.539 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:01.539 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:01.539 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:01.797 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:02.055 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:02.314 [568/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:02.314 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:02.314 [570/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:02.314 [571/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:02.314 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:02.314 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:02.880 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:02.880 [575/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.880 [576/710] Linking static target lib/librte_vhost.a 00:03:02.880 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:02.880 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:02.880 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:03.139 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:03.139 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:03.139 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:03.397 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:03.656 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:03.656 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:03.656 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:03.656 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:03.656 [588/710] Linking static target drivers/librte_net_i40e.a 00:03:03.656 [589/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:03.656 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:03.656 [591/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:03.913 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:03.913 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.171 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:04.171 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:04.171 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.429 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:04.429 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:04.429 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:04.688 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:04.946 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:04.946 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:04.946 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:04.946 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:05.204 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:05.204 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:05.463 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:05.721 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:05.721 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:05.979 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:05.979 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:05.979 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:05.979 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:06.236 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:06.236 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:06.236 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:06.236 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:06.494 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:06.752 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:06.752 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:07.010 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:07.010 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:07.010 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:07.657 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:07.657 [625/710] Linking static target lib/librte_pipeline.a 00:03:07.915 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:07.915 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:07.915 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:07.915 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:08.174 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:08.174 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:08.174 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.432 [633/710] Linking target app/dpdk-dumpcap 00:03:08.432 [634/710] Linking target app/dpdk-graph 00:03:08.432 [635/710] Linking target app/dpdk-pdump 00:03:08.432 [636/710] Linking target app/dpdk-proc-info 00:03:08.691 [637/710] Linking target app/dpdk-test-acl 00:03:08.691 [638/710] Linking target app/dpdk-test-cmdline 00:03:08.691 [639/710] Linking target app/dpdk-test-compress-perf 00:03:08.949 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:08.949 [641/710] Linking target app/dpdk-test-dma-perf 00:03:08.949 [642/710] Linking target app/dpdk-test-fib 00:03:08.949 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:09.208 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:09.208 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:09.467 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:09.467 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:09.467 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:09.725 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:09.725 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:09.982 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:09.982 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:09.982 [653/710] Linking target app/dpdk-test-gpudev 00:03:09.982 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:09.982 [655/710] Linking target app/dpdk-test-eventdev 00:03:10.239 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:10.239 [657/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:10.239 [658/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:10.497 [659/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.497 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:10.497 [661/710] Linking target lib/librte_pipeline.so.24.0 00:03:10.497 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:10.497 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:10.754 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:10.754 [665/710] Linking target app/dpdk-test-flow-perf 00:03:10.755 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:10.755 [667/710] Linking target app/dpdk-test-bbdev 00:03:10.755 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:11.013 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:11.271 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:11.271 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:11.271 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:11.271 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:11.530 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:11.530 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:11.787 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:11.787 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:12.044 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:12.301 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:12.301 [680/710] Linking target app/dpdk-test-pipeline 00:03:12.301 [681/710] Linking target app/dpdk-test-mldev 00:03:12.301 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:12.301 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:12.866 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:12.866 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:12.866 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:13.124 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:13.124 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:13.382 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:13.639 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:13.639 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:13.639 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:13.639 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:13.897 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:14.464 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:14.464 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:14.726 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:14.726 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:14.726 [699/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:14.726 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:14.985 [701/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:14.985 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:14.985 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:14.985 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:15.242 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:15.242 [706/710] Linking target app/dpdk-test-sad 00:03:15.500 [707/710] Linking target app/dpdk-test-regex 00:03:15.758 [708/710] Linking target app/dpdk-testpmd 00:03:15.758 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:16.324 [710/710] Linking target app/dpdk-test-security-perf 00:03:16.324 18:19:23 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:16.324 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:16.324 [0/1] Installing files. 
00:03:16.586 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.586 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.588 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.588 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.589 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.590 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.590 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.591 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.591 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.591 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:16.851 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
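(Note on the library install step above: each DPDK library is installed both as a static archive, librte_*.a, and as a versioned shared object, librte_*.so.24.0, under /home/vagrant/spdk_repo/dpdk/build/lib; the unversioned .so symlinks and the libdpdk.pc pkg-config file are added later in this same install phase. A minimal sketch of how an application build could consume this prefix, assuming a hypothetical source file hello.c and a host with pkg-config available; these commands are illustrative only and are not part of the CI run:
  $ export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  $ pkg-config --modversion libdpdk          # prints the DPDK version of this install
  $ cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello
  $ LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib ./hello   # shared libs live outside the default linker path
The exact flags come from the libdpdk.pc generated by this build; static linking would additionally need pkg-config --static and whole-archive handling.)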
00:03:16.851 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.851 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:17.114 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:17.114 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:17.114 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.114 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:17.114 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.116 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.117 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.118 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.118 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:17.118 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:17.118 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:17.118 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:17.118 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:17.118 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:17.118 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:17.118 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:17.118 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:17.118 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:17.118 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:17.118 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:17.118 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:17.118 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:17.118 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:17.118 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:17.118 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:17.118 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:17.118 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:17.118 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:17.118 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:17.118 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:17.118 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:17.118 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:17.118 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:17.118 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:17.118 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:17.118 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:17.118 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:17.118 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:17.118 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:17.118 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:17.118 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:17.118 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:17.118 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:17.118 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:17.118 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:17.118 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:17.118 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:17.118 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:17.118 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:17.118 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:17.118 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:17.118 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:17.118 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:17.118 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:17.118 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:17.118 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:17.118 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:17.118 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:17.118 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:17.118 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:17.118 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:17.118 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:17.118 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:17.118 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:17.118 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:17.118 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:17.118 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:17.118 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:17.118 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:17.118 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:17.118 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:17.118 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:17.118 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:17.118 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:17.118 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:17.118 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:17.119 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:17.119 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:17.119 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:17.119 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:17.119 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:17.119 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:17.119 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:17.119 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:17.119 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:17.119 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:17.119 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:17.119 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:17.119 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:17.119 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:17.119 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:17.119 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:17.119 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:17.119 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:17.119 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:17.119 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:17.119 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:17.119 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:17.119 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:17.119 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:17.119 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:17.119 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:17.119 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:17.119 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:17.119 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:17.119 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:17.119 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:17.119 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:17.119 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:17.119 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:17.119 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:17.119 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:17.119 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:17.119 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:17.119 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:17.119 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:17.119 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:17.119 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:17.119 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:17.119 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:17.119 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:17.119 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:17.119 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:17.119 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:17.119 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:17.119 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:17.119 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:17.119 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:17.119 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:17.119 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:17.119 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:17.119 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:17.119 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:17.119 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:17.119 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:17.119 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:17.119 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:17.119 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:17.119 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
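The install stage above leaves each DPDK library behind as a chain of versioned symlinks (librte_<name>.so -> librte_<name>.so.24 -> librte_<name>.so.24.0) and links the driver PMDs into the dpdk/pmds-24.0 plugin directory. A minimal sketch of how that layout could be checked by hand, using the paths shown in this log; these commands are illustrative and are not part of the autotest run:

    # Build-tree library directory taken from the log above.
    BUILD_LIB=/home/vagrant/spdk_repo/dpdk/build/lib

    # Each library resolves through a versioned symlink chain to the real object.
    ls -l "$BUILD_LIB"/librte_eal.so*
    readlink "$BUILD_LIB/librte_eal.so"       # expected: librte_eal.so.24
    readlink "$BUILD_LIB/librte_eal.so.24"    # expected: librte_eal.so.24.0

    # Driver PMDs get a second set of symlinks in the runtime plugin directory.
    ls -l "$BUILD_LIB"/dpdk/pmds-24.0/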
00:03:17.119 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:17.119 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:17.119 18:19:24 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:17.119 18:19:24 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:17.119 18:19:24 -- common/autobuild_common.sh@200 -- $ cat 00:03:17.378 18:19:24 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.378 00:03:17.378 real 1m0.176s 00:03:17.378 user 7m22.110s 00:03:17.378 sys 1m7.001s 00:03:17.378 ************************************ 00:03:17.378 END TEST build_native_dpdk 00:03:17.378 ************************************ 00:03:17.378 18:19:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:17.378 18:19:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.378 18:19:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:17.378 18:19:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:17.378 18:19:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:17.378 18:19:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:17.378 18:19:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:17.378 18:19:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:17.378 18:19:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:17.378 18:19:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:17.378 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:17.637 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.637 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:17.637 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.896 Using 'verbs' RDMA provider 00:03:33.364 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:45.566 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:45.566 go version go1.21.1 linux/amd64 00:03:45.566 Creating mk/config.mk...done. 00:03:45.566 Creating mk/cc.flags.mk...done. 00:03:45.566 Type 'make' to build. 00:03:45.566 18:19:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:45.566 18:19:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:45.566 18:19:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:45.566 18:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.566 ************************************ 00:03:45.566 START TEST make 00:03:45.566 ************************************ 00:03:45.566 18:19:52 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:45.566 make[1]: Nothing to be done for 'all'. 
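The configure step above points SPDK at the freshly installed DPDK via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build and, per the "Using .../build/lib/pkgconfig for additional libs" line, resolves it through the libdpdk.pc files installed earlier. A minimal sketch of querying that pkg-config metadata directly; pkg-config itself is standard and the path is the one from the log, but this is not a step the autotest performs:

    # Point pkg-config at the .pc files the DPDK install step produced.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig

    pkg-config --modversion libdpdk   # DPDK release this tree was built from
    pkg-config --cflags libdpdk       # include paths used when building SPDK against it
    pkg-config --libs libdpdk         # link flags for the shared DPDK libraries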
00:04:12.132 CC lib/ut/ut.o 00:04:12.133 CC lib/ut_mock/mock.o 00:04:12.133 CC lib/log/log.o 00:04:12.133 CC lib/log/log_deprecated.o 00:04:12.133 CC lib/log/log_flags.o 00:04:12.133 LIB libspdk_ut_mock.a 00:04:12.133 SO libspdk_ut_mock.so.5.0 00:04:12.133 LIB libspdk_ut.a 00:04:12.133 LIB libspdk_log.a 00:04:12.133 SO libspdk_ut.so.1.0 00:04:12.133 SYMLINK libspdk_ut_mock.so 00:04:12.133 SO libspdk_log.so.6.1 00:04:12.133 SYMLINK libspdk_ut.so 00:04:12.133 SYMLINK libspdk_log.so 00:04:12.133 CC lib/dma/dma.o 00:04:12.133 CC lib/ioat/ioat.o 00:04:12.133 CC lib/util/base64.o 00:04:12.133 CC lib/util/bit_array.o 00:04:12.133 CC lib/util/cpuset.o 00:04:12.133 CC lib/util/crc16.o 00:04:12.133 CC lib/util/crc32.o 00:04:12.133 CC lib/util/crc32c.o 00:04:12.133 CXX lib/trace_parser/trace.o 00:04:12.133 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.133 CC lib/vfio_user/host/vfio_user.o 00:04:12.133 CC lib/util/crc32_ieee.o 00:04:12.133 CC lib/util/crc64.o 00:04:12.133 CC lib/util/dif.o 00:04:12.133 CC lib/util/fd.o 00:04:12.133 LIB libspdk_ioat.a 00:04:12.133 SO libspdk_ioat.so.6.0 00:04:12.133 CC lib/util/file.o 00:04:12.133 CC lib/util/hexlify.o 00:04:12.133 LIB libspdk_dma.a 00:04:12.133 SYMLINK libspdk_ioat.so 00:04:12.133 CC lib/util/iov.o 00:04:12.133 CC lib/util/math.o 00:04:12.133 CC lib/util/pipe.o 00:04:12.133 CC lib/util/strerror_tls.o 00:04:12.133 SO libspdk_dma.so.3.0 00:04:12.133 LIB libspdk_vfio_user.a 00:04:12.133 SYMLINK libspdk_dma.so 00:04:12.133 CC lib/util/string.o 00:04:12.133 CC lib/util/uuid.o 00:04:12.133 SO libspdk_vfio_user.so.4.0 00:04:12.133 CC lib/util/fd_group.o 00:04:12.133 CC lib/util/xor.o 00:04:12.133 CC lib/util/zipf.o 00:04:12.133 SYMLINK libspdk_vfio_user.so 00:04:12.133 LIB libspdk_util.a 00:04:12.133 SO libspdk_util.so.8.0 00:04:12.133 SYMLINK libspdk_util.so 00:04:12.133 CC lib/idxd/idxd.o 00:04:12.133 CC lib/env_dpdk/env.o 00:04:12.133 CC lib/json/json_parse.o 00:04:12.133 CC lib/json/json_util.o 00:04:12.133 CC lib/idxd/idxd_user.o 00:04:12.133 CC lib/json/json_write.o 00:04:12.133 CC lib/rdma/common.o 00:04:12.133 CC lib/conf/conf.o 00:04:12.133 CC lib/vmd/vmd.o 00:04:12.133 LIB libspdk_trace_parser.a 00:04:12.133 SO libspdk_trace_parser.so.4.0 00:04:12.133 LIB libspdk_conf.a 00:04:12.133 CC lib/rdma/rdma_verbs.o 00:04:12.133 SYMLINK libspdk_trace_parser.so 00:04:12.133 CC lib/vmd/led.o 00:04:12.133 CC lib/env_dpdk/memory.o 00:04:12.133 SO libspdk_conf.so.5.0 00:04:12.133 CC lib/idxd/idxd_kernel.o 00:04:12.133 SYMLINK libspdk_conf.so 00:04:12.133 CC lib/env_dpdk/pci.o 00:04:12.133 CC lib/env_dpdk/init.o 00:04:12.133 CC lib/env_dpdk/threads.o 00:04:12.133 LIB libspdk_json.a 00:04:12.133 LIB libspdk_rdma.a 00:04:12.133 SO libspdk_json.so.5.1 00:04:12.133 SO libspdk_rdma.so.5.0 00:04:12.133 CC lib/env_dpdk/pci_ioat.o 00:04:12.133 LIB libspdk_idxd.a 00:04:12.133 SYMLINK libspdk_json.so 00:04:12.133 CC lib/env_dpdk/pci_virtio.o 00:04:12.133 SYMLINK libspdk_rdma.so 00:04:12.133 CC lib/env_dpdk/pci_vmd.o 00:04:12.133 CC lib/env_dpdk/pci_idxd.o 00:04:12.133 SO libspdk_idxd.so.11.0 00:04:12.133 LIB libspdk_vmd.a 00:04:12.133 SYMLINK libspdk_idxd.so 00:04:12.133 CC lib/env_dpdk/pci_event.o 00:04:12.133 SO libspdk_vmd.so.5.0 00:04:12.133 CC lib/env_dpdk/sigbus_handler.o 00:04:12.133 CC lib/env_dpdk/pci_dpdk.o 00:04:12.133 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:12.133 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:12.133 SYMLINK libspdk_vmd.so 00:04:12.133 CC lib/jsonrpc/jsonrpc_server.o 00:04:12.133 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:12.133 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:12.133 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:12.391 LIB libspdk_jsonrpc.a 00:04:12.391 SO libspdk_jsonrpc.so.5.1 00:04:12.649 SYMLINK libspdk_jsonrpc.so 00:04:12.649 CC lib/rpc/rpc.o 00:04:12.907 LIB libspdk_env_dpdk.a 00:04:12.907 SO libspdk_env_dpdk.so.13.0 00:04:12.907 LIB libspdk_rpc.a 00:04:12.907 SO libspdk_rpc.so.5.0 00:04:13.165 SYMLINK libspdk_rpc.so 00:04:13.165 SYMLINK libspdk_env_dpdk.so 00:04:13.165 CC lib/trace/trace.o 00:04:13.165 CC lib/trace/trace_rpc.o 00:04:13.165 CC lib/trace/trace_flags.o 00:04:13.165 CC lib/sock/sock.o 00:04:13.165 CC lib/notify/notify.o 00:04:13.165 CC lib/notify/notify_rpc.o 00:04:13.165 CC lib/sock/sock_rpc.o 00:04:13.424 LIB libspdk_notify.a 00:04:13.424 SO libspdk_notify.so.5.0 00:04:13.424 LIB libspdk_trace.a 00:04:13.424 SYMLINK libspdk_notify.so 00:04:13.424 SO libspdk_trace.so.9.0 00:04:13.424 SYMLINK libspdk_trace.so 00:04:13.424 LIB libspdk_sock.a 00:04:13.682 SO libspdk_sock.so.8.0 00:04:13.682 SYMLINK libspdk_sock.so 00:04:13.682 CC lib/thread/thread.o 00:04:13.682 CC lib/thread/iobuf.o 00:04:13.940 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:13.940 CC lib/nvme/nvme_fabric.o 00:04:13.940 CC lib/nvme/nvme_ctrlr.o 00:04:13.940 CC lib/nvme/nvme_ns_cmd.o 00:04:13.940 CC lib/nvme/nvme_ns.o 00:04:13.940 CC lib/nvme/nvme_pcie_common.o 00:04:13.940 CC lib/nvme/nvme_pcie.o 00:04:13.940 CC lib/nvme/nvme_qpair.o 00:04:13.940 CC lib/nvme/nvme.o 00:04:14.506 CC lib/nvme/nvme_quirks.o 00:04:14.506 CC lib/nvme/nvme_transport.o 00:04:14.764 CC lib/nvme/nvme_discovery.o 00:04:14.764 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:14.764 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:14.764 CC lib/nvme/nvme_tcp.o 00:04:14.764 CC lib/nvme/nvme_opal.o 00:04:15.022 CC lib/nvme/nvme_io_msg.o 00:04:15.022 CC lib/nvme/nvme_poll_group.o 00:04:15.022 LIB libspdk_thread.a 00:04:15.280 SO libspdk_thread.so.9.0 00:04:15.280 CC lib/nvme/nvme_zns.o 00:04:15.280 SYMLINK libspdk_thread.so 00:04:15.280 CC lib/nvme/nvme_cuse.o 00:04:15.280 CC lib/accel/accel.o 00:04:15.538 CC lib/blob/blobstore.o 00:04:15.538 CC lib/init/json_config.o 00:04:15.538 CC lib/virtio/virtio.o 00:04:15.795 CC lib/virtio/virtio_vhost_user.o 00:04:15.795 CC lib/init/subsystem.o 00:04:15.795 CC lib/virtio/virtio_vfio_user.o 00:04:15.795 CC lib/accel/accel_rpc.o 00:04:15.795 CC lib/accel/accel_sw.o 00:04:15.795 CC lib/init/subsystem_rpc.o 00:04:16.052 CC lib/blob/request.o 00:04:16.053 CC lib/virtio/virtio_pci.o 00:04:16.053 CC lib/nvme/nvme_vfio_user.o 00:04:16.053 CC lib/nvme/nvme_rdma.o 00:04:16.053 CC lib/init/rpc.o 00:04:16.053 CC lib/blob/zeroes.o 00:04:16.053 CC lib/blob/blob_bs_dev.o 00:04:16.310 LIB libspdk_init.a 00:04:16.310 SO libspdk_init.so.4.0 00:04:16.310 LIB libspdk_virtio.a 00:04:16.310 SYMLINK libspdk_init.so 00:04:16.310 SO libspdk_virtio.so.6.0 00:04:16.310 LIB libspdk_accel.a 00:04:16.310 SYMLINK libspdk_virtio.so 00:04:16.310 SO libspdk_accel.so.14.0 00:04:16.567 CC lib/event/reactor.o 00:04:16.567 CC lib/event/app.o 00:04:16.567 CC lib/event/app_rpc.o 00:04:16.567 CC lib/event/log_rpc.o 00:04:16.567 CC lib/event/scheduler_static.o 00:04:16.568 SYMLINK libspdk_accel.so 00:04:16.568 CC lib/bdev/bdev.o 00:04:16.568 CC lib/bdev/bdev_rpc.o 00:04:16.568 CC lib/bdev/bdev_zone.o 00:04:16.568 CC lib/bdev/part.o 00:04:16.568 CC lib/bdev/scsi_nvme.o 00:04:16.825 LIB libspdk_event.a 00:04:16.825 SO libspdk_event.so.12.0 00:04:17.083 SYMLINK libspdk_event.so 00:04:17.341 LIB libspdk_nvme.a 00:04:17.599 SO libspdk_nvme.so.12.0 00:04:17.857 SYMLINK libspdk_nvme.so 00:04:18.115 
LIB libspdk_blob.a 00:04:18.373 SO libspdk_blob.so.10.1 00:04:18.374 SYMLINK libspdk_blob.so 00:04:18.632 CC lib/blobfs/blobfs.o 00:04:18.632 CC lib/lvol/lvol.o 00:04:18.632 CC lib/blobfs/tree.o 00:04:19.202 LIB libspdk_bdev.a 00:04:19.202 LIB libspdk_blobfs.a 00:04:19.202 SO libspdk_bdev.so.14.0 00:04:19.460 SO libspdk_blobfs.so.9.0 00:04:19.460 SYMLINK libspdk_blobfs.so 00:04:19.460 LIB libspdk_lvol.a 00:04:19.460 SYMLINK libspdk_bdev.so 00:04:19.460 SO libspdk_lvol.so.9.1 00:04:19.460 SYMLINK libspdk_lvol.so 00:04:19.460 CC lib/scsi/dev.o 00:04:19.460 CC lib/scsi/lun.o 00:04:19.460 CC lib/scsi/port.o 00:04:19.460 CC lib/scsi/scsi.o 00:04:19.460 CC lib/scsi/scsi_bdev.o 00:04:19.460 CC lib/scsi/scsi_pr.o 00:04:19.460 CC lib/ftl/ftl_core.o 00:04:19.460 CC lib/ublk/ublk.o 00:04:19.460 CC lib/nvmf/ctrlr.o 00:04:19.717 CC lib/nbd/nbd.o 00:04:19.717 CC lib/ublk/ublk_rpc.o 00:04:19.717 CC lib/scsi/scsi_rpc.o 00:04:19.717 CC lib/nvmf/ctrlr_discovery.o 00:04:19.975 CC lib/scsi/task.o 00:04:19.975 CC lib/ftl/ftl_init.o 00:04:19.975 CC lib/nbd/nbd_rpc.o 00:04:19.975 CC lib/nvmf/ctrlr_bdev.o 00:04:19.975 CC lib/nvmf/subsystem.o 00:04:19.975 CC lib/nvmf/nvmf.o 00:04:19.975 CC lib/nvmf/nvmf_rpc.o 00:04:20.232 LIB libspdk_scsi.a 00:04:20.232 LIB libspdk_nbd.a 00:04:20.232 CC lib/ftl/ftl_layout.o 00:04:20.232 SO libspdk_nbd.so.6.0 00:04:20.232 SO libspdk_scsi.so.8.0 00:04:20.232 LIB libspdk_ublk.a 00:04:20.232 SO libspdk_ublk.so.2.0 00:04:20.232 SYMLINK libspdk_nbd.so 00:04:20.232 CC lib/nvmf/transport.o 00:04:20.232 CC lib/nvmf/tcp.o 00:04:20.232 SYMLINK libspdk_scsi.so 00:04:20.232 CC lib/nvmf/rdma.o 00:04:20.232 SYMLINK libspdk_ublk.so 00:04:20.490 CC lib/iscsi/conn.o 00:04:20.490 CC lib/ftl/ftl_debug.o 00:04:20.490 CC lib/iscsi/init_grp.o 00:04:20.747 CC lib/ftl/ftl_io.o 00:04:21.005 CC lib/iscsi/iscsi.o 00:04:21.005 CC lib/iscsi/md5.o 00:04:21.005 CC lib/ftl/ftl_sb.o 00:04:21.005 CC lib/vhost/vhost.o 00:04:21.005 CC lib/ftl/ftl_l2p.o 00:04:21.005 CC lib/ftl/ftl_l2p_flat.o 00:04:21.005 CC lib/ftl/ftl_nv_cache.o 00:04:21.005 CC lib/ftl/ftl_band.o 00:04:21.005 CC lib/vhost/vhost_rpc.o 00:04:21.262 CC lib/ftl/ftl_band_ops.o 00:04:21.262 CC lib/ftl/ftl_writer.o 00:04:21.262 CC lib/iscsi/param.o 00:04:21.520 CC lib/iscsi/portal_grp.o 00:04:21.520 CC lib/ftl/ftl_rq.o 00:04:21.520 CC lib/vhost/vhost_scsi.o 00:04:21.520 CC lib/iscsi/tgt_node.o 00:04:21.520 CC lib/vhost/vhost_blk.o 00:04:21.777 CC lib/vhost/rte_vhost_user.o 00:04:21.777 CC lib/iscsi/iscsi_subsystem.o 00:04:21.777 CC lib/iscsi/iscsi_rpc.o 00:04:21.777 CC lib/ftl/ftl_reloc.o 00:04:22.034 CC lib/ftl/ftl_l2p_cache.o 00:04:22.034 CC lib/iscsi/task.o 00:04:22.034 CC lib/ftl/ftl_p2l.o 00:04:22.034 CC lib/ftl/mngt/ftl_mngt.o 00:04:22.291 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:22.291 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:22.291 LIB libspdk_iscsi.a 00:04:22.291 LIB libspdk_nvmf.a 00:04:22.291 SO libspdk_iscsi.so.7.0 00:04:22.291 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:22.291 SO libspdk_nvmf.so.17.0 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:22.548 SYMLINK libspdk_iscsi.so 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.548 SYMLINK libspdk_nvmf.so 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:22.548 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:22.548 CC lib/ftl/utils/ftl_conf.o 00:04:22.805 CC 
lib/ftl/utils/ftl_md.o 00:04:22.805 CC lib/ftl/utils/ftl_mempool.o 00:04:22.805 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.805 LIB libspdk_vhost.a 00:04:22.805 CC lib/ftl/utils/ftl_property.o 00:04:22.805 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.805 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.805 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.805 SO libspdk_vhost.so.7.1 00:04:22.805 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.805 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.805 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.805 SYMLINK libspdk_vhost.so 00:04:22.805 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:23.062 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:23.062 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:23.062 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:23.062 CC lib/ftl/base/ftl_base_dev.o 00:04:23.062 CC lib/ftl/base/ftl_base_bdev.o 00:04:23.062 CC lib/ftl/ftl_trace.o 00:04:23.320 LIB libspdk_ftl.a 00:04:23.577 SO libspdk_ftl.so.8.0 00:04:23.835 SYMLINK libspdk_ftl.so 00:04:24.147 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.147 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.147 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.147 CC module/blob/bdev/blob_bdev.o 00:04:24.147 CC module/accel/iaa/accel_iaa.o 00:04:24.147 CC module/accel/ioat/accel_ioat.o 00:04:24.147 CC module/accel/dsa/accel_dsa.o 00:04:24.147 CC module/accel/error/accel_error.o 00:04:24.147 CC module/scheduler/gscheduler/gscheduler.o 00:04:24.147 CC module/sock/posix/posix.o 00:04:24.404 LIB libspdk_env_dpdk_rpc.a 00:04:24.404 LIB libspdk_scheduler_gscheduler.a 00:04:24.404 LIB libspdk_scheduler_dpdk_governor.a 00:04:24.404 SO libspdk_env_dpdk_rpc.so.5.0 00:04:24.404 SO libspdk_scheduler_gscheduler.so.3.0 00:04:24.404 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.404 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:24.404 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.404 CC module/accel/error/accel_error_rpc.o 00:04:24.404 CC module/accel/dsa/accel_dsa_rpc.o 00:04:24.404 SYMLINK libspdk_env_dpdk_rpc.so 00:04:24.404 LIB libspdk_blob_bdev.a 00:04:24.404 SYMLINK libspdk_scheduler_gscheduler.so 00:04:24.404 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:24.404 LIB libspdk_scheduler_dynamic.a 00:04:24.404 SO libspdk_blob_bdev.so.10.1 00:04:24.404 SO libspdk_scheduler_dynamic.so.3.0 00:04:24.404 LIB libspdk_accel_ioat.a 00:04:24.404 LIB libspdk_accel_iaa.a 00:04:24.404 SYMLINK libspdk_blob_bdev.so 00:04:24.404 SYMLINK libspdk_scheduler_dynamic.so 00:04:24.661 LIB libspdk_accel_error.a 00:04:24.661 LIB libspdk_accel_dsa.a 00:04:24.661 SO libspdk_accel_ioat.so.5.0 00:04:24.661 SO libspdk_accel_iaa.so.2.0 00:04:24.661 SO libspdk_accel_dsa.so.4.0 00:04:24.661 SO libspdk_accel_error.so.1.0 00:04:24.661 SYMLINK libspdk_accel_ioat.so 00:04:24.661 SYMLINK libspdk_accel_iaa.so 00:04:24.661 SYMLINK libspdk_accel_dsa.so 00:04:24.661 SYMLINK libspdk_accel_error.so 00:04:24.661 CC module/blobfs/bdev/blobfs_bdev.o 00:04:24.661 CC module/bdev/lvol/vbdev_lvol.o 00:04:24.661 CC module/bdev/delay/vbdev_delay.o 00:04:24.661 CC module/bdev/error/vbdev_error.o 00:04:24.661 CC module/bdev/malloc/bdev_malloc.o 00:04:24.661 CC module/bdev/null/bdev_null.o 00:04:24.661 CC module/bdev/gpt/gpt.o 00:04:24.661 CC module/bdev/nvme/bdev_nvme.o 00:04:24.661 CC module/bdev/passthru/vbdev_passthru.o 00:04:24.918 LIB libspdk_sock_posix.a 00:04:24.918 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:24.918 CC module/bdev/gpt/vbdev_gpt.o 00:04:24.918 SO libspdk_sock_posix.so.5.0 00:04:24.918 CC module/bdev/null/bdev_null_rpc.o 00:04:24.918 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:24.918 SYMLINK libspdk_sock_posix.so 00:04:24.918 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.175 LIB libspdk_blobfs_bdev.a 00:04:25.175 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:25.175 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:25.175 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.175 SO libspdk_blobfs_bdev.so.5.0 00:04:25.175 LIB libspdk_bdev_error.a 00:04:25.175 LIB libspdk_bdev_null.a 00:04:25.175 SYMLINK libspdk_blobfs_bdev.so 00:04:25.175 CC module/bdev/nvme/nvme_rpc.o 00:04:25.175 LIB libspdk_bdev_gpt.a 00:04:25.175 SO libspdk_bdev_error.so.5.0 00:04:25.175 SO libspdk_bdev_null.so.5.0 00:04:25.175 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.175 SO libspdk_bdev_gpt.so.5.0 00:04:25.175 LIB libspdk_bdev_passthru.a 00:04:25.175 LIB libspdk_bdev_malloc.a 00:04:25.175 LIB libspdk_bdev_delay.a 00:04:25.175 SYMLINK libspdk_bdev_null.so 00:04:25.175 SYMLINK libspdk_bdev_error.so 00:04:25.175 SO libspdk_bdev_malloc.so.5.0 00:04:25.175 SO libspdk_bdev_passthru.so.5.0 00:04:25.175 SYMLINK libspdk_bdev_gpt.so 00:04:25.432 SO libspdk_bdev_delay.so.5.0 00:04:25.432 SYMLINK libspdk_bdev_malloc.so 00:04:25.432 SYMLINK libspdk_bdev_passthru.so 00:04:25.432 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.432 SYMLINK libspdk_bdev_delay.so 00:04:25.432 CC module/bdev/split/vbdev_split.o 00:04:25.432 CC module/bdev/raid/bdev_raid.o 00:04:25.432 CC module/bdev/nvme/vbdev_opal.o 00:04:25.433 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:25.433 CC module/bdev/aio/bdev_aio.o 00:04:25.433 CC module/bdev/ftl/bdev_ftl.o 00:04:25.433 LIB libspdk_bdev_lvol.a 00:04:25.433 SO libspdk_bdev_lvol.so.5.0 00:04:25.691 CC module/bdev/raid/bdev_raid_rpc.o 00:04:25.691 CC module/bdev/raid/bdev_raid_sb.o 00:04:25.691 SYMLINK libspdk_bdev_lvol.so 00:04:25.691 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:25.691 CC module/bdev/split/vbdev_split_rpc.o 00:04:25.691 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:25.691 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:25.691 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:25.691 CC module/bdev/aio/bdev_aio_rpc.o 00:04:25.691 LIB libspdk_bdev_split.a 00:04:25.949 SO libspdk_bdev_split.so.5.0 00:04:25.949 LIB libspdk_bdev_ftl.a 00:04:25.949 CC module/bdev/raid/raid0.o 00:04:25.949 CC module/bdev/raid/raid1.o 00:04:25.949 SO libspdk_bdev_ftl.so.5.0 00:04:25.949 SYMLINK libspdk_bdev_split.so 00:04:25.949 LIB libspdk_bdev_zone_block.a 00:04:25.949 SYMLINK libspdk_bdev_ftl.so 00:04:25.949 LIB libspdk_bdev_aio.a 00:04:25.949 CC module/bdev/raid/concat.o 00:04:25.949 SO libspdk_bdev_zone_block.so.5.0 00:04:25.949 SO libspdk_bdev_aio.so.5.0 00:04:25.949 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:25.949 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:25.949 CC module/bdev/iscsi/bdev_iscsi.o 00:04:25.949 SYMLINK libspdk_bdev_zone_block.so 00:04:25.949 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:25.949 SYMLINK libspdk_bdev_aio.so 00:04:25.949 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:26.207 LIB libspdk_bdev_raid.a 00:04:26.207 SO libspdk_bdev_raid.so.5.0 00:04:26.466 LIB libspdk_bdev_iscsi.a 00:04:26.466 SYMLINK libspdk_bdev_raid.so 00:04:26.466 SO libspdk_bdev_iscsi.so.5.0 00:04:26.466 SYMLINK libspdk_bdev_iscsi.so 00:04:26.466 LIB libspdk_bdev_virtio.a 00:04:26.466 SO libspdk_bdev_virtio.so.5.0 00:04:26.724 SYMLINK libspdk_bdev_virtio.so 00:04:26.983 LIB libspdk_bdev_nvme.a 00:04:26.983 SO libspdk_bdev_nvme.so.6.0 00:04:26.983 SYMLINK libspdk_bdev_nvme.so 00:04:27.548 CC module/event/subsystems/scheduler/scheduler.o 
00:04:27.548 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:27.548 CC module/event/subsystems/iobuf/iobuf.o 00:04:27.548 CC module/event/subsystems/sock/sock.o 00:04:27.548 CC module/event/subsystems/vmd/vmd.o 00:04:27.548 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:27.548 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:27.548 LIB libspdk_event_vhost_blk.a 00:04:27.548 LIB libspdk_event_iobuf.a 00:04:27.548 LIB libspdk_event_scheduler.a 00:04:27.548 LIB libspdk_event_sock.a 00:04:27.548 LIB libspdk_event_vmd.a 00:04:27.548 SO libspdk_event_vhost_blk.so.2.0 00:04:27.548 SO libspdk_event_scheduler.so.3.0 00:04:27.548 SO libspdk_event_iobuf.so.2.0 00:04:27.548 SO libspdk_event_sock.so.4.0 00:04:27.548 SO libspdk_event_vmd.so.5.0 00:04:27.549 SYMLINK libspdk_event_vhost_blk.so 00:04:27.549 SYMLINK libspdk_event_scheduler.so 00:04:27.549 SYMLINK libspdk_event_iobuf.so 00:04:27.549 SYMLINK libspdk_event_sock.so 00:04:27.549 SYMLINK libspdk_event_vmd.so 00:04:27.807 CC module/event/subsystems/accel/accel.o 00:04:28.065 LIB libspdk_event_accel.a 00:04:28.065 SO libspdk_event_accel.so.5.0 00:04:28.065 SYMLINK libspdk_event_accel.so 00:04:28.322 CC module/event/subsystems/bdev/bdev.o 00:04:28.580 LIB libspdk_event_bdev.a 00:04:28.580 SO libspdk_event_bdev.so.5.0 00:04:28.580 SYMLINK libspdk_event_bdev.so 00:04:28.839 CC module/event/subsystems/nbd/nbd.o 00:04:28.839 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.839 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.839 CC module/event/subsystems/scsi/scsi.o 00:04:28.839 CC module/event/subsystems/ublk/ublk.o 00:04:29.097 LIB libspdk_event_nbd.a 00:04:29.097 LIB libspdk_event_ublk.a 00:04:29.098 LIB libspdk_event_scsi.a 00:04:29.098 SO libspdk_event_ublk.so.2.0 00:04:29.098 SO libspdk_event_nbd.so.5.0 00:04:29.098 SO libspdk_event_scsi.so.5.0 00:04:29.098 LIB libspdk_event_nvmf.a 00:04:29.098 SYMLINK libspdk_event_ublk.so 00:04:29.098 SYMLINK libspdk_event_nbd.so 00:04:29.098 SYMLINK libspdk_event_scsi.so 00:04:29.098 SO libspdk_event_nvmf.so.5.0 00:04:29.098 SYMLINK libspdk_event_nvmf.so 00:04:29.356 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:29.356 CC module/event/subsystems/iscsi/iscsi.o 00:04:29.356 LIB libspdk_event_vhost_scsi.a 00:04:29.356 LIB libspdk_event_iscsi.a 00:04:29.356 SO libspdk_event_vhost_scsi.so.2.0 00:04:29.614 SO libspdk_event_iscsi.so.5.0 00:04:29.614 SYMLINK libspdk_event_vhost_scsi.so 00:04:29.614 SYMLINK libspdk_event_iscsi.so 00:04:29.614 SO libspdk.so.5.0 00:04:29.614 SYMLINK libspdk.so 00:04:29.873 TEST_HEADER include/spdk/accel.h 00:04:29.873 CXX app/trace/trace.o 00:04:29.873 TEST_HEADER include/spdk/accel_module.h 00:04:29.873 TEST_HEADER include/spdk/assert.h 00:04:29.873 TEST_HEADER include/spdk/barrier.h 00:04:29.873 TEST_HEADER include/spdk/base64.h 00:04:29.873 TEST_HEADER include/spdk/bdev.h 00:04:29.873 TEST_HEADER include/spdk/bdev_module.h 00:04:29.873 TEST_HEADER include/spdk/bdev_zone.h 00:04:29.873 TEST_HEADER include/spdk/bit_array.h 00:04:29.873 TEST_HEADER include/spdk/bit_pool.h 00:04:29.873 TEST_HEADER include/spdk/blob_bdev.h 00:04:29.873 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:29.873 TEST_HEADER include/spdk/blobfs.h 00:04:29.873 TEST_HEADER include/spdk/blob.h 00:04:29.873 TEST_HEADER include/spdk/conf.h 00:04:29.873 TEST_HEADER include/spdk/config.h 00:04:29.873 TEST_HEADER include/spdk/cpuset.h 00:04:29.873 TEST_HEADER include/spdk/crc16.h 00:04:29.873 TEST_HEADER include/spdk/crc32.h 00:04:29.873 TEST_HEADER include/spdk/crc64.h 00:04:29.873 TEST_HEADER 
include/spdk/dif.h 00:04:29.873 TEST_HEADER include/spdk/dma.h 00:04:29.873 TEST_HEADER include/spdk/endian.h 00:04:29.873 TEST_HEADER include/spdk/env_dpdk.h 00:04:29.873 CC examples/accel/perf/accel_perf.o 00:04:29.873 TEST_HEADER include/spdk/env.h 00:04:29.873 TEST_HEADER include/spdk/event.h 00:04:29.873 TEST_HEADER include/spdk/fd_group.h 00:04:29.873 TEST_HEADER include/spdk/fd.h 00:04:29.873 TEST_HEADER include/spdk/file.h 00:04:29.873 TEST_HEADER include/spdk/ftl.h 00:04:29.873 TEST_HEADER include/spdk/gpt_spec.h 00:04:29.873 CC examples/ioat/perf/perf.o 00:04:29.873 TEST_HEADER include/spdk/hexlify.h 00:04:29.873 TEST_HEADER include/spdk/histogram_data.h 00:04:29.873 TEST_HEADER include/spdk/idxd.h 00:04:29.873 CC test/bdev/bdevio/bdevio.o 00:04:29.873 TEST_HEADER include/spdk/idxd_spec.h 00:04:29.873 TEST_HEADER include/spdk/init.h 00:04:29.873 TEST_HEADER include/spdk/ioat.h 00:04:29.873 CC test/blobfs/mkfs/mkfs.o 00:04:29.873 TEST_HEADER include/spdk/ioat_spec.h 00:04:29.873 CC examples/blob/hello_world/hello_blob.o 00:04:29.873 TEST_HEADER include/spdk/iscsi_spec.h 00:04:29.873 TEST_HEADER include/spdk/json.h 00:04:29.873 TEST_HEADER include/spdk/jsonrpc.h 00:04:29.873 TEST_HEADER include/spdk/likely.h 00:04:29.873 TEST_HEADER include/spdk/log.h 00:04:29.873 TEST_HEADER include/spdk/lvol.h 00:04:29.873 CC test/accel/dif/dif.o 00:04:29.873 TEST_HEADER include/spdk/memory.h 00:04:29.873 TEST_HEADER include/spdk/mmio.h 00:04:29.873 TEST_HEADER include/spdk/nbd.h 00:04:29.873 CC examples/bdev/hello_world/hello_bdev.o 00:04:29.873 TEST_HEADER include/spdk/notify.h 00:04:29.873 TEST_HEADER include/spdk/nvme.h 00:04:29.873 TEST_HEADER include/spdk/nvme_intel.h 00:04:29.873 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:29.873 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:29.873 TEST_HEADER include/spdk/nvme_spec.h 00:04:29.873 TEST_HEADER include/spdk/nvme_zns.h 00:04:29.873 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:29.873 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:29.873 TEST_HEADER include/spdk/nvmf.h 00:04:29.873 TEST_HEADER include/spdk/nvmf_spec.h 00:04:29.873 TEST_HEADER include/spdk/nvmf_transport.h 00:04:29.873 TEST_HEADER include/spdk/opal.h 00:04:30.131 TEST_HEADER include/spdk/opal_spec.h 00:04:30.131 TEST_HEADER include/spdk/pci_ids.h 00:04:30.131 TEST_HEADER include/spdk/pipe.h 00:04:30.131 TEST_HEADER include/spdk/queue.h 00:04:30.131 TEST_HEADER include/spdk/reduce.h 00:04:30.131 TEST_HEADER include/spdk/rpc.h 00:04:30.131 TEST_HEADER include/spdk/scheduler.h 00:04:30.131 CC test/app/bdev_svc/bdev_svc.o 00:04:30.131 TEST_HEADER include/spdk/scsi.h 00:04:30.131 TEST_HEADER include/spdk/scsi_spec.h 00:04:30.131 TEST_HEADER include/spdk/sock.h 00:04:30.131 TEST_HEADER include/spdk/stdinc.h 00:04:30.131 TEST_HEADER include/spdk/string.h 00:04:30.131 TEST_HEADER include/spdk/thread.h 00:04:30.131 TEST_HEADER include/spdk/trace.h 00:04:30.131 TEST_HEADER include/spdk/trace_parser.h 00:04:30.131 TEST_HEADER include/spdk/tree.h 00:04:30.131 TEST_HEADER include/spdk/ublk.h 00:04:30.131 TEST_HEADER include/spdk/util.h 00:04:30.131 TEST_HEADER include/spdk/uuid.h 00:04:30.131 TEST_HEADER include/spdk/version.h 00:04:30.131 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:30.131 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:30.131 TEST_HEADER include/spdk/vhost.h 00:04:30.131 TEST_HEADER include/spdk/vmd.h 00:04:30.131 TEST_HEADER include/spdk/xor.h 00:04:30.131 TEST_HEADER include/spdk/zipf.h 00:04:30.131 CXX test/cpp_headers/accel.o 00:04:30.131 LINK mkfs 00:04:30.131 LINK 
hello_blob 00:04:30.131 LINK hello_bdev 00:04:30.389 LINK ioat_perf 00:04:30.389 LINK bdev_svc 00:04:30.389 LINK accel_perf 00:04:30.389 LINK spdk_trace 00:04:30.389 CXX test/cpp_headers/accel_module.o 00:04:30.389 LINK dif 00:04:30.389 LINK bdevio 00:04:30.389 CC examples/ioat/verify/verify.o 00:04:30.647 CXX test/cpp_headers/assert.o 00:04:30.647 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.647 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:30.647 CC examples/blob/cli/blobcli.o 00:04:30.647 CC test/app/histogram_perf/histogram_perf.o 00:04:30.647 CC app/trace_record/trace_record.o 00:04:30.647 CC test/app/jsoncat/jsoncat.o 00:04:30.647 CC examples/nvme/hello_world/hello_world.o 00:04:30.647 CXX test/cpp_headers/barrier.o 00:04:30.647 LINK verify 00:04:30.647 CC examples/nvme/reconnect/reconnect.o 00:04:30.647 LINK histogram_perf 00:04:30.905 LINK jsoncat 00:04:30.905 CXX test/cpp_headers/base64.o 00:04:30.905 CXX test/cpp_headers/bdev.o 00:04:30.905 LINK spdk_trace_record 00:04:30.905 CXX test/cpp_headers/bdev_module.o 00:04:30.905 LINK hello_world 00:04:30.905 CXX test/cpp_headers/bdev_zone.o 00:04:30.905 LINK nvme_fuzz 00:04:31.163 LINK blobcli 00:04:31.163 LINK reconnect 00:04:31.163 CXX test/cpp_headers/bit_array.o 00:04:31.163 CC app/nvmf_tgt/nvmf_main.o 00:04:31.163 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:31.163 CC app/iscsi_tgt/iscsi_tgt.o 00:04:31.163 CC test/dma/test_dma/test_dma.o 00:04:31.163 CC app/spdk_tgt/spdk_tgt.o 00:04:31.163 CC test/env/mem_callbacks/mem_callbacks.o 00:04:31.421 LINK bdevperf 00:04:31.421 CXX test/cpp_headers/bit_pool.o 00:04:31.421 CC app/spdk_lspci/spdk_lspci.o 00:04:31.421 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:31.421 LINK nvmf_tgt 00:04:31.421 LINK iscsi_tgt 00:04:31.421 LINK spdk_lspci 00:04:31.421 LINK spdk_tgt 00:04:31.421 CXX test/cpp_headers/blob_bdev.o 00:04:31.421 CXX test/cpp_headers/blobfs_bdev.o 00:04:31.680 CXX test/cpp_headers/blobfs.o 00:04:31.680 LINK test_dma 00:04:31.680 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:31.680 CXX test/cpp_headers/blob.o 00:04:31.680 CC app/spdk_nvme_perf/perf.o 00:04:31.680 CC examples/nvme/arbitration/arbitration.o 00:04:31.939 CC test/event/event_perf/event_perf.o 00:04:31.939 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:31.939 LINK nvme_manage 00:04:31.939 CC test/event/reactor/reactor.o 00:04:31.939 CXX test/cpp_headers/conf.o 00:04:31.939 LINK mem_callbacks 00:04:31.939 CC test/lvol/esnap/esnap.o 00:04:31.939 LINK event_perf 00:04:31.939 LINK reactor 00:04:31.939 CXX test/cpp_headers/config.o 00:04:32.197 CXX test/cpp_headers/cpuset.o 00:04:32.197 CC test/env/vtophys/vtophys.o 00:04:32.197 LINK arbitration 00:04:32.197 CC examples/sock/hello_world/hello_sock.o 00:04:32.197 LINK vhost_fuzz 00:04:32.197 LINK vtophys 00:04:32.197 CXX test/cpp_headers/crc16.o 00:04:32.197 CC test/event/reactor_perf/reactor_perf.o 00:04:32.197 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.455 CC examples/nvme/hotplug/hotplug.o 00:04:32.455 LINK lsvmd 00:04:32.455 LINK reactor_perf 00:04:32.455 CXX test/cpp_headers/crc32.o 00:04:32.455 LINK hello_sock 00:04:32.455 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:32.455 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:32.455 LINK spdk_nvme_perf 00:04:32.714 CXX test/cpp_headers/crc64.o 00:04:32.714 LINK hotplug 00:04:32.714 CC examples/vmd/led/led.o 00:04:32.714 LINK env_dpdk_post_init 00:04:32.714 LINK cmb_copy 00:04:32.714 CC test/event/app_repeat/app_repeat.o 00:04:32.714 CC examples/nvme/abort/abort.o 00:04:32.714 CC 
app/spdk_nvme_identify/identify.o 00:04:32.714 CXX test/cpp_headers/dif.o 00:04:32.714 CXX test/cpp_headers/dma.o 00:04:32.714 LINK led 00:04:32.714 LINK iscsi_fuzz 00:04:32.971 CXX test/cpp_headers/endian.o 00:04:32.971 LINK app_repeat 00:04:32.971 CC test/env/memory/memory_ut.o 00:04:32.971 CC test/app/stub/stub.o 00:04:32.971 CXX test/cpp_headers/env_dpdk.o 00:04:32.971 CXX test/cpp_headers/env.o 00:04:33.229 CC test/nvme/aer/aer.o 00:04:33.229 LINK abort 00:04:33.229 CC examples/nvmf/nvmf/nvmf.o 00:04:33.229 CC test/event/scheduler/scheduler.o 00:04:33.229 LINK stub 00:04:33.229 CXX test/cpp_headers/event.o 00:04:33.488 CC app/spdk_nvme_discover/discovery_aer.o 00:04:33.488 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.488 LINK aer 00:04:33.488 LINK scheduler 00:04:33.488 LINK nvmf 00:04:33.488 CXX test/cpp_headers/fd_group.o 00:04:33.488 LINK spdk_nvme_discover 00:04:33.488 LINK spdk_nvme_identify 00:04:33.488 LINK pmr_persistence 00:04:33.488 CC test/nvme/reset/reset.o 00:04:33.746 CXX test/cpp_headers/fd.o 00:04:33.746 CC test/env/pci/pci_ut.o 00:04:33.746 CC test/rpc_client/rpc_client_test.o 00:04:33.746 CXX test/cpp_headers/file.o 00:04:33.746 LINK memory_ut 00:04:33.746 CC test/nvme/sgl/sgl.o 00:04:33.746 CC app/spdk_top/spdk_top.o 00:04:33.746 CC test/thread/poller_perf/poller_perf.o 00:04:34.004 LINK reset 00:04:34.004 CC examples/util/zipf/zipf.o 00:04:34.004 LINK rpc_client_test 00:04:34.004 CXX test/cpp_headers/ftl.o 00:04:34.004 LINK poller_perf 00:04:34.004 LINK zipf 00:04:34.005 LINK sgl 00:04:34.005 LINK pci_ut 00:04:34.262 CC app/vhost/vhost.o 00:04:34.263 CC app/spdk_dd/spdk_dd.o 00:04:34.263 CXX test/cpp_headers/gpt_spec.o 00:04:34.263 CC app/fio/nvme/fio_plugin.o 00:04:34.263 CC app/fio/bdev/fio_plugin.o 00:04:34.263 LINK vhost 00:04:34.263 CC test/nvme/e2edp/nvme_dp.o 00:04:34.263 CXX test/cpp_headers/hexlify.o 00:04:34.263 CC examples/thread/thread/thread_ex.o 00:04:34.520 CXX test/cpp_headers/histogram_data.o 00:04:34.521 LINK spdk_dd 00:04:34.521 CXX test/cpp_headers/idxd.o 00:04:34.521 CXX test/cpp_headers/idxd_spec.o 00:04:34.521 LINK nvme_dp 00:04:34.779 CC examples/idxd/perf/perf.o 00:04:34.779 LINK spdk_top 00:04:34.779 LINK thread 00:04:34.779 LINK spdk_nvme 00:04:34.779 CXX test/cpp_headers/init.o 00:04:34.779 LINK spdk_bdev 00:04:34.779 CC test/nvme/overhead/overhead.o 00:04:34.779 CC test/nvme/err_injection/err_injection.o 00:04:35.037 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.037 CC test/nvme/startup/startup.o 00:04:35.037 CXX test/cpp_headers/ioat.o 00:04:35.037 CC test/nvme/reserve/reserve.o 00:04:35.037 CC test/nvme/simple_copy/simple_copy.o 00:04:35.037 LINK idxd_perf 00:04:35.037 LINK err_injection 00:04:35.037 LINK overhead 00:04:35.037 LINK startup 00:04:35.037 LINK interrupt_tgt 00:04:35.037 CXX test/cpp_headers/ioat_spec.o 00:04:35.295 CXX test/cpp_headers/iscsi_spec.o 00:04:35.295 LINK simple_copy 00:04:35.295 LINK reserve 00:04:35.295 CC test/nvme/connect_stress/connect_stress.o 00:04:35.295 CXX test/cpp_headers/json.o 00:04:35.295 CC test/nvme/boot_partition/boot_partition.o 00:04:35.295 CC test/nvme/compliance/nvme_compliance.o 00:04:35.295 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.295 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.553 CC test/nvme/fdp/fdp.o 00:04:35.553 CC test/nvme/cuse/cuse.o 00:04:35.553 CXX test/cpp_headers/jsonrpc.o 00:04:35.553 LINK connect_stress 00:04:35.553 LINK boot_partition 00:04:35.553 LINK fused_ordering 00:04:35.553 LINK doorbell_aers 00:04:35.811 LINK nvme_compliance 
00:04:35.811 CXX test/cpp_headers/likely.o 00:04:35.811 CXX test/cpp_headers/log.o 00:04:35.811 CXX test/cpp_headers/lvol.o 00:04:35.811 CXX test/cpp_headers/memory.o 00:04:35.811 CXX test/cpp_headers/mmio.o 00:04:35.811 LINK fdp 00:04:35.811 CXX test/cpp_headers/nbd.o 00:04:35.811 CXX test/cpp_headers/notify.o 00:04:36.068 CXX test/cpp_headers/nvme.o 00:04:36.068 CXX test/cpp_headers/nvme_intel.o 00:04:36.068 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.068 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.068 CXX test/cpp_headers/nvme_spec.o 00:04:36.068 CXX test/cpp_headers/nvme_zns.o 00:04:36.068 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.068 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.068 CXX test/cpp_headers/nvmf.o 00:04:36.325 CXX test/cpp_headers/nvmf_spec.o 00:04:36.325 CXX test/cpp_headers/nvmf_transport.o 00:04:36.325 CXX test/cpp_headers/opal.o 00:04:36.325 CXX test/cpp_headers/opal_spec.o 00:04:36.325 CXX test/cpp_headers/pci_ids.o 00:04:36.325 CXX test/cpp_headers/pipe.o 00:04:36.325 CXX test/cpp_headers/queue.o 00:04:36.325 CXX test/cpp_headers/reduce.o 00:04:36.325 CXX test/cpp_headers/rpc.o 00:04:36.581 CXX test/cpp_headers/scheduler.o 00:04:36.581 CXX test/cpp_headers/scsi.o 00:04:36.581 CXX test/cpp_headers/scsi_spec.o 00:04:36.581 CXX test/cpp_headers/sock.o 00:04:36.581 CXX test/cpp_headers/stdinc.o 00:04:36.582 CXX test/cpp_headers/string.o 00:04:36.582 CXX test/cpp_headers/thread.o 00:04:36.582 LINK cuse 00:04:36.582 LINK esnap 00:04:36.582 CXX test/cpp_headers/trace.o 00:04:36.582 CXX test/cpp_headers/trace_parser.o 00:04:36.582 CXX test/cpp_headers/tree.o 00:04:36.839 CXX test/cpp_headers/ublk.o 00:04:36.839 CXX test/cpp_headers/util.o 00:04:36.839 CXX test/cpp_headers/uuid.o 00:04:36.839 CXX test/cpp_headers/version.o 00:04:36.839 CXX test/cpp_headers/vfio_user_pci.o 00:04:36.839 CXX test/cpp_headers/vfio_user_spec.o 00:04:36.839 CXX test/cpp_headers/vhost.o 00:04:36.839 CXX test/cpp_headers/vmd.o 00:04:36.839 CXX test/cpp_headers/xor.o 00:04:36.839 CXX test/cpp_headers/zipf.o 00:04:42.125 00:04:42.125 real 0m56.169s 00:04:42.125 user 5m16.201s 00:04:42.125 sys 1m3.440s 00:04:42.125 ************************************ 00:04:42.125 END TEST make 00:04:42.125 ************************************ 00:04:42.125 18:20:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:42.125 18:20:48 -- common/autotest_common.sh@10 -- $ set +x 00:04:42.125 18:20:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.125 18:20:48 -- nvmf/common.sh@7 -- # uname -s 00:04:42.125 18:20:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.125 18:20:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.125 18:20:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.125 18:20:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.125 18:20:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.125 18:20:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.125 18:20:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.125 18:20:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.125 18:20:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.125 18:20:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.125 18:20:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:04:42.125 18:20:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:04:42.125 18:20:48 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.125 18:20:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.125 18:20:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:42.125 18:20:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.125 18:20:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.125 18:20:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.125 18:20:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.125 18:20:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.125 18:20:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.125 18:20:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.125 18:20:48 -- paths/export.sh@5 -- # export PATH 00:04:42.125 18:20:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.125 18:20:48 -- nvmf/common.sh@46 -- # : 0 00:04:42.125 18:20:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:42.125 18:20:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:42.125 18:20:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:42.125 18:20:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.125 18:20:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.125 18:20:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:42.125 18:20:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:42.125 18:20:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:42.125 18:20:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:42.125 18:20:48 -- spdk/autotest.sh@32 -- # uname -s 00:04:42.125 18:20:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:42.125 18:20:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:42.125 18:20:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:42.125 18:20:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:42.125 18:20:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:42.125 18:20:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:42.125 18:20:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:42.125 18:20:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:42.125 18:20:48 -- spdk/autotest.sh@48 -- # udevadm_pid=61821 00:04:42.125 18:20:48 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:42.125 
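At this point autotest.sh records the existing kernel core_pattern (the systemd-coredump pipe handler) and replaces it with SPDK's core-collector.sh so that crash dumps from the test run land under the output/coredumps directory. A hedged sketch of that mechanism using the handler line from the log; the restore step at the end is illustrative and assumed to happen in the cleanup path rather than shown in this excerpt. Writing core_pattern requires root, and %P/%s/%t are the PID, signal, and timestamp specifiers documented in core(5):

    # Save the current handler, then install the SPDK collector as a pipe handler.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern

    # ... tests run; any crashing process is piped to core-collector.sh ...

    # Put the original systemd-coredump handler back once the run is finished.
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern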
18:20:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:42.125 18:20:48 -- spdk/autotest.sh@54 -- # echo 61831 00:04:42.125 18:20:48 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:42.125 18:20:48 -- spdk/autotest.sh@56 -- # echo 61833 00:04:42.125 18:20:48 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:42.125 18:20:48 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:42.125 18:20:48 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:42.125 18:20:48 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:42.125 18:20:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:42.125 18:20:48 -- common/autotest_common.sh@10 -- # set +x 00:04:42.125 18:20:48 -- spdk/autotest.sh@70 -- # create_test_list 00:04:42.125 18:20:48 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:42.125 18:20:48 -- common/autotest_common.sh@10 -- # set +x 00:04:42.125 18:20:48 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:42.125 18:20:48 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:42.125 18:20:48 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:42.125 18:20:48 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:42.125 18:20:48 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:42.125 18:20:48 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:42.125 18:20:48 -- common/autotest_common.sh@1440 -- # uname 00:04:42.125 18:20:48 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:42.125 18:20:48 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:42.125 18:20:48 -- common/autotest_common.sh@1460 -- # uname 00:04:42.125 18:20:48 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:42.126 18:20:48 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:42.126 18:20:48 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:42.126 18:20:48 -- spdk/autotest.sh@83 -- # hash lcov 00:04:42.126 18:20:48 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:42.126 18:20:48 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:42.126 --rc lcov_branch_coverage=1 00:04:42.126 --rc lcov_function_coverage=1 00:04:42.126 --rc genhtml_branch_coverage=1 00:04:42.126 --rc genhtml_function_coverage=1 00:04:42.126 --rc genhtml_legend=1 00:04:42.126 --rc geninfo_all_blocks=1 00:04:42.126 ' 00:04:42.126 18:20:48 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:42.126 --rc lcov_branch_coverage=1 00:04:42.126 --rc lcov_function_coverage=1 00:04:42.126 --rc genhtml_branch_coverage=1 00:04:42.126 --rc genhtml_function_coverage=1 00:04:42.126 --rc genhtml_legend=1 00:04:42.126 --rc geninfo_all_blocks=1 00:04:42.126 ' 00:04:42.126 18:20:48 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:42.126 --rc lcov_branch_coverage=1 00:04:42.126 --rc lcov_function_coverage=1 00:04:42.126 --rc genhtml_branch_coverage=1 00:04:42.126 --rc genhtml_function_coverage=1 00:04:42.126 --rc genhtml_legend=1 00:04:42.126 --rc geninfo_all_blocks=1 00:04:42.126 --no-external' 00:04:42.126 18:20:48 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:42.126 --rc lcov_branch_coverage=1 00:04:42.126 --rc lcov_function_coverage=1 00:04:42.126 --rc genhtml_branch_coverage=1 00:04:42.126 --rc genhtml_function_coverage=1 00:04:42.126 --rc 
genhtml_legend=1 00:04:42.126 --rc geninfo_all_blocks=1 00:04:42.126 --no-external' 00:04:42.126 18:20:48 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:42.126 lcov: LCOV version 1.14 00:04:42.126 18:20:49 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:50.235 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:50.235 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:50.235 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:50.235 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:50.235 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:50.235 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:08.314 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:08.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 
00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:08.315 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 
00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:08.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:08.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:08.316 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:08.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:08.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:11.600 18:21:18 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:11.600 18:21:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.600 18:21:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.600 18:21:18 -- spdk/autotest.sh@102 -- # rm -f 00:05:11.600 18:21:18 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.118 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:12.118 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:12.118 18:21:19 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:12.118 18:21:19 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:12.118 18:21:19 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:12.118 18:21:19 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:12.118 18:21:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:12.118 18:21:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:12.118 18:21:19 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:12.118 18:21:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1657 -- # 
for nvme in /sys/block/nvme* 00:05:12.118 18:21:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:12.118 18:21:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:12.118 18:21:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:12.118 18:21:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:12.118 18:21:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:12.118 18:21:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:12.118 18:21:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:12.118 18:21:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:12.118 18:21:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:12.118 18:21:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:12.118 18:21:19 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:12.118 18:21:19 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:12.118 18:21:19 -- spdk/autotest.sh@121 -- # grep -v p 00:05:12.118 18:21:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:12.118 18:21:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:12.118 18:21:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:12.118 18:21:19 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:12.118 18:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:12.118 No valid GPT data, bailing 00:05:12.118 18:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.118 18:21:19 -- scripts/common.sh@393 -- # pt= 00:05:12.118 18:21:19 -- scripts/common.sh@394 -- # return 1 00:05:12.118 18:21:19 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:12.118 1+0 records in 00:05:12.118 1+0 records out 00:05:12.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996749 s, 105 MB/s 00:05:12.118 18:21:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:12.118 18:21:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:12.118 18:21:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:12.118 18:21:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:12.118 18:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:12.118 No valid GPT data, bailing 00:05:12.118 18:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:12.118 18:21:19 -- scripts/common.sh@393 -- # pt= 00:05:12.118 18:21:19 -- scripts/common.sh@394 -- # return 1 00:05:12.118 18:21:19 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:12.118 1+0 records in 00:05:12.118 1+0 records out 00:05:12.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387756 s, 270 MB/s 00:05:12.118 18:21:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:12.118 18:21:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:12.118 18:21:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 
00:05:12.118 18:21:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:12.118 18:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:12.376 No valid GPT data, bailing 00:05:12.376 18:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:12.376 18:21:19 -- scripts/common.sh@393 -- # pt= 00:05:12.376 18:21:19 -- scripts/common.sh@394 -- # return 1 00:05:12.376 18:21:19 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:12.376 1+0 records in 00:05:12.376 1+0 records out 00:05:12.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408743 s, 257 MB/s 00:05:12.376 18:21:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:12.376 18:21:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:12.376 18:21:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:12.376 18:21:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:12.376 18:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:12.376 No valid GPT data, bailing 00:05:12.376 18:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:12.376 18:21:19 -- scripts/common.sh@393 -- # pt= 00:05:12.376 18:21:19 -- scripts/common.sh@394 -- # return 1 00:05:12.376 18:21:19 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:12.376 1+0 records in 00:05:12.376 1+0 records out 00:05:12.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439445 s, 239 MB/s 00:05:12.376 18:21:19 -- spdk/autotest.sh@129 -- # sync 00:05:12.376 18:21:19 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:12.376 18:21:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:12.376 18:21:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:14.276 18:21:21 -- spdk/autotest.sh@135 -- # uname -s 00:05:14.276 18:21:21 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:14.276 18:21:21 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:14.276 18:21:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.276 18:21:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.276 18:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.276 ************************************ 00:05:14.276 START TEST setup.sh 00:05:14.276 ************************************ 00:05:14.276 18:21:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:14.276 * Looking for test storage... 00:05:14.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.276 18:21:21 -- setup/test-setup.sh@10 -- # uname -s 00:05:14.276 18:21:21 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:14.276 18:21:21 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:14.276 18:21:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.276 18:21:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.276 18:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.276 ************************************ 00:05:14.276 START TEST acl 00:05:14.276 ************************************ 00:05:14.276 18:21:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:14.535 * Looking for test storage... 
00:05:14.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.535 18:21:21 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:14.535 18:21:21 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:14.535 18:21:21 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:14.535 18:21:21 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:14.535 18:21:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:14.535 18:21:21 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:14.535 18:21:21 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:14.535 18:21:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:14.535 18:21:21 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:14.535 18:21:21 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:14.535 18:21:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:14.535 18:21:21 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:14.535 18:21:21 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:14.535 18:21:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:14.535 18:21:21 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:14.535 18:21:21 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:14.535 18:21:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:14.535 18:21:21 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:14.535 18:21:21 -- setup/acl.sh@12 -- # devs=() 00:05:14.535 18:21:21 -- setup/acl.sh@12 -- # declare -a devs 00:05:14.535 18:21:21 -- setup/acl.sh@13 -- # drivers=() 00:05:14.535 18:21:21 -- setup/acl.sh@13 -- # declare -A drivers 00:05:14.535 18:21:21 -- setup/acl.sh@51 -- # setup reset 00:05:14.535 18:21:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.535 18:21:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.102 18:21:22 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:15.102 18:21:22 -- setup/acl.sh@16 -- # local dev driver 00:05:15.102 18:21:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.102 18:21:22 -- setup/acl.sh@15 -- # setup output status 00:05:15.102 18:21:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.102 18:21:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:15.361 Hugepages 00:05:15.361 node hugesize free / total 00:05:15.361 18:21:22 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:15.361 18:21:22 -- setup/acl.sh@19 -- # continue 00:05:15.361 18:21:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.361 00:05:15.361 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.361 18:21:22 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:15.361 18:21:22 -- setup/acl.sh@19 -- # continue 00:05:15.361 18:21:22 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:05:15.361 18:21:22 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:15.361 18:21:22 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:15.361 18:21:22 -- setup/acl.sh@20 -- # continue 00:05:15.361 18:21:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.361 18:21:22 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:15.361 18:21:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:15.361 18:21:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:15.361 18:21:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:15.361 18:21:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:15.361 18:21:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.620 18:21:22 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:15.620 18:21:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:15.620 18:21:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:15.620 18:21:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:15.620 18:21:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:15.620 18:21:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.620 18:21:22 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:15.620 18:21:22 -- setup/acl.sh@54 -- # run_test denied denied 00:05:15.620 18:21:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.620 18:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.620 18:21:22 -- common/autotest_common.sh@10 -- # set +x 00:05:15.620 ************************************ 00:05:15.620 START TEST denied 00:05:15.620 ************************************ 00:05:15.620 18:21:22 -- common/autotest_common.sh@1104 -- # denied 00:05:15.620 18:21:22 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:15.620 18:21:22 -- setup/acl.sh@38 -- # setup output config 00:05:15.620 18:21:22 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:15.620 18:21:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.620 18:21:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.557 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:16.557 18:21:23 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:16.557 18:21:23 -- setup/acl.sh@28 -- # local dev driver 00:05:16.557 18:21:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:16.557 18:21:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:16.557 18:21:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:16.557 18:21:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:16.557 18:21:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:16.557 18:21:23 -- setup/acl.sh@41 -- # setup reset 00:05:16.557 18:21:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.557 18:21:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.122 00:05:17.122 real 0m1.473s 00:05:17.122 user 0m0.579s 00:05:17.122 sys 0m0.834s 00:05:17.122 18:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.122 18:21:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.122 ************************************ 00:05:17.122 END TEST denied 00:05:17.122 ************************************ 00:05:17.122 18:21:24 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:17.122 18:21:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.123 18:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.123 
18:21:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.123 ************************************ 00:05:17.123 START TEST allowed 00:05:17.123 ************************************ 00:05:17.123 18:21:24 -- common/autotest_common.sh@1104 -- # allowed 00:05:17.123 18:21:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:17.123 18:21:24 -- setup/acl.sh@45 -- # setup output config 00:05:17.123 18:21:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.123 18:21:24 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:17.123 18:21:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.689 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.689 18:21:25 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:17.689 18:21:25 -- setup/acl.sh@28 -- # local dev driver 00:05:17.689 18:21:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:17.689 18:21:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:17.689 18:21:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:17.689 18:21:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:17.689 18:21:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:17.689 18:21:25 -- setup/acl.sh@48 -- # setup reset 00:05:17.689 18:21:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.689 18:21:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.625 00:05:18.625 real 0m1.447s 00:05:18.625 user 0m0.630s 00:05:18.625 sys 0m0.829s 00:05:18.625 18:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.625 18:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:18.625 ************************************ 00:05:18.625 END TEST allowed 00:05:18.625 ************************************ 00:05:18.625 00:05:18.625 real 0m4.159s 00:05:18.625 user 0m1.767s 00:05:18.625 sys 0m2.371s 00:05:18.625 18:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.625 18:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:18.625 ************************************ 00:05:18.625 END TEST acl 00:05:18.625 ************************************ 00:05:18.625 18:21:25 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:18.625 18:21:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.625 18:21:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.625 18:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:18.625 ************************************ 00:05:18.625 START TEST hugepages 00:05:18.625 ************************************ 00:05:18.625 18:21:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:18.625 * Looking for test storage... 
00:05:18.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.625 18:21:25 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:18.625 18:21:25 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:18.625 18:21:25 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:18.625 18:21:25 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:18.625 18:21:25 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:18.625 18:21:25 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:18.625 18:21:25 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:18.625 18:21:25 -- setup/common.sh@18 -- # local node= 00:05:18.625 18:21:25 -- setup/common.sh@19 -- # local var val 00:05:18.625 18:21:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.625 18:21:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.625 18:21:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.625 18:21:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.625 18:21:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.625 18:21:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4484068 kB' 'MemAvailable: 7386720 kB' 'Buffers: 2436 kB' 'Cached: 3103872 kB' 'SwapCached: 0 kB' 'Active: 475000 kB' 'Inactive: 2733592 kB' 'Active(anon): 112776 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 103864 kB' 'Mapped: 48956 kB' 'Shmem: 10492 kB' 'KReclaimable: 87548 kB' 'Slab: 167576 kB' 'SReclaimable: 87548 kB' 'SUnreclaim: 80028 kB' 'KernelStack: 6744 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 333088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- 
setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.625 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.625 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # continue 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.626 18:21:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.626 18:21:25 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.626 18:21:25 -- setup/common.sh@33 -- # echo 2048 00:05:18.626 18:21:25 -- setup/common.sh@33 -- # return 0 00:05:18.626 18:21:25 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:18.626 18:21:25 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:18.626 18:21:25 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:18.626 18:21:25 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:18.626 18:21:25 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:18.626 18:21:25 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
00:05:18.626 18:21:25 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:18.626 18:21:25 -- setup/hugepages.sh@207 -- # get_nodes 00:05:18.626 18:21:25 -- setup/hugepages.sh@27 -- # local node 00:05:18.626 18:21:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.626 18:21:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:18.626 18:21:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.626 18:21:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.626 18:21:25 -- setup/hugepages.sh@208 -- # clear_hp 00:05:18.626 18:21:25 -- setup/hugepages.sh@37 -- # local node hp 00:05:18.626 18:21:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.626 18:21:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.626 18:21:25 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.626 18:21:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.626 18:21:25 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.626 18:21:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.626 18:21:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.626 18:21:25 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:18.626 18:21:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.626 18:21:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.626 18:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:18.626 ************************************ 00:05:18.626 START TEST default_setup 00:05:18.626 ************************************ 00:05:18.626 18:21:26 -- common/autotest_common.sh@1104 -- # default_setup 00:05:18.626 18:21:26 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:18.626 18:21:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:18.626 18:21:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:18.627 18:21:26 -- setup/hugepages.sh@51 -- # shift 00:05:18.627 18:21:26 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:18.627 18:21:26 -- setup/hugepages.sh@52 -- # local node_ids 00:05:18.627 18:21:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.627 18:21:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:18.627 18:21:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:18.627 18:21:26 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:18.627 18:21:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.627 18:21:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:18.627 18:21:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.627 18:21:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.627 18:21:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.627 18:21:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:18.627 18:21:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:18.627 18:21:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:18.627 18:21:26 -- setup/hugepages.sh@73 -- # return 0 00:05:18.627 18:21:26 -- setup/hugepages.sh@137 -- # setup output 00:05:18.627 18:21:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.627 18:21:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.501 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.501 
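The trace above shows the hugepages test deriving its page count (a 2097152 kB request at a 2048 kB default hugepage size yields nr_hugepages=1024) and clear_hp zeroing every per-node nr_hugepages counter before default_setup runs. A minimal sketch of that sizing-and-reset step, written independently of the SPDK scripts (variable names are illustrative, not the originals; writing the sysfs files requires root):

#!/usr/bin/env bash
# Sketch: size a hugepage request and reset per-node pools, as the trace shows.
set -euo pipefail

size_kb=2097152                                # requested test size in kB (from the trace)
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_pages=$(( size_kb / default_kb ))           # 2097152 / 2048 = 1024 pages

echo "will request ${nr_pages} hugepages of ${default_kb} kB each"

# Zero every per-node hugepage pool before the test, mirroring clear_hp.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done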
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.501 18:21:26 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:19.501 18:21:26 -- setup/hugepages.sh@89 -- # local node 00:05:19.501 18:21:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.501 18:21:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.501 18:21:26 -- setup/hugepages.sh@92 -- # local surp 00:05:19.501 18:21:26 -- setup/hugepages.sh@93 -- # local resv 00:05:19.501 18:21:26 -- setup/hugepages.sh@94 -- # local anon 00:05:19.501 18:21:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.501 18:21:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.501 18:21:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.501 18:21:26 -- setup/common.sh@18 -- # local node= 00:05:19.501 18:21:26 -- setup/common.sh@19 -- # local var val 00:05:19.501 18:21:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.501 18:21:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.501 18:21:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.501 18:21:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.501 18:21:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.501 18:21:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6574408 kB' 'MemAvailable: 9476860 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490728 kB' 'Inactive: 2733596 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119644 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167132 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79992 kB' 'KernelStack: 6624 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 
18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.501 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.501 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 
-- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.502 18:21:26 -- setup/common.sh@33 -- # echo 0 00:05:19.502 18:21:26 -- setup/common.sh@33 -- # return 0 00:05:19.502 18:21:26 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.502 18:21:26 -- 
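The long per-key scan traced above is get_meminfo walking /proc/meminfo field by field until it reaches the requested key; here AnonHugePages resolves to 0 (anon=0). A condensed sketch of an equivalent lookup, using the same IFS=': ' read pattern seen in the trace (the helper name and single-loop structure are illustrative, not the SPDK implementation):

#!/usr/bin/env bash
# Sketch: look up one /proc/meminfo field, e.g. AnonHugePages.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # Print the value and stop as soon as the requested key matches.
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_field AnonHugePages   # prints 0 on the machine in this trace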
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.502 18:21:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.502 18:21:26 -- setup/common.sh@18 -- # local node= 00:05:19.502 18:21:26 -- setup/common.sh@19 -- # local var val 00:05:19.502 18:21:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.502 18:21:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.502 18:21:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.502 18:21:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.502 18:21:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.502 18:21:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6574160 kB' 'MemAvailable: 9476612 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490744 kB' 'Inactive: 2733596 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167132 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79992 kB' 'KernelStack: 6624 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.502 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.502 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.502 18:21:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- 
setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.503 18:21:26 -- setup/common.sh@33 -- # echo 0 00:05:19.503 18:21:26 -- setup/common.sh@33 -- # return 0 00:05:19.503 18:21:26 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.503 18:21:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.503 18:21:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.503 18:21:26 -- setup/common.sh@18 -- # local node= 00:05:19.503 18:21:26 -- setup/common.sh@19 -- # local var val 00:05:19.503 18:21:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.503 18:21:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.503 18:21:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.503 18:21:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.503 18:21:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.503 18:21:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.503 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.503 18:21:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6574160 kB' 'MemAvailable: 9476612 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 
'SwapCached: 0 kB' 'Active: 490920 kB' 'Inactive: 2733596 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119800 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167132 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79992 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:19.504 18:21:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.504 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.764 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.764 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 
00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.765 18:21:26 -- setup/common.sh@33 -- # echo 0 00:05:19.765 18:21:26 -- setup/common.sh@33 -- # return 0 00:05:19.765 nr_hugepages=1024 00:05:19.765 resv_hugepages=0 00:05:19.765 18:21:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.765 18:21:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.765 18:21:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.765 surplus_hugepages=0 00:05:19.765 18:21:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.765 anon_hugepages=0 00:05:19.765 18:21:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.765 18:21:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.765 18:21:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.765 18:21:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.765 18:21:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.765 18:21:26 -- setup/common.sh@18 -- # local node= 00:05:19.765 18:21:26 -- setup/common.sh@19 -- # local var val 00:05:19.765 18:21:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.765 18:21:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.765 18:21:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.765 18:21:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.765 18:21:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.765 18:21:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.765 18:21:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6573908 kB' 'MemAvailable: 9476360 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490700 kB' 'Inactive: 2733596 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119576 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167132 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79992 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.765 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.765 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 
18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:26 -- setup/common.sh@32 -- # continue 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.766 18:21:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.766 18:21:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.766 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.767 18:21:27 -- setup/common.sh@33 -- # echo 1024 
00:05:19.767 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:19.767 18:21:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.767 18:21:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.767 18:21:27 -- setup/hugepages.sh@27 -- # local node 00:05:19.767 18:21:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.767 18:21:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.767 18:21:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.767 18:21:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.767 18:21:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.767 18:21:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.767 18:21:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.767 18:21:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.767 18:21:27 -- setup/common.sh@18 -- # local node=0 00:05:19.767 18:21:27 -- setup/common.sh@19 -- # local var val 00:05:19.767 18:21:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.767 18:21:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.767 18:21:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.767 18:21:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.767 18:21:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.767 18:21:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575608 kB' 'MemUsed: 5666372 kB' 'SwapCached: 0 kB' 'Active: 490624 kB' 'Inactive: 2733600 kB' 'Active(anon): 128400 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3106300 kB' 'Mapped: 48760 kB' 'AnonPages: 119520 kB' 'Shmem: 10468 kB' 'KernelStack: 6624 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167132 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 
18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 
18:21:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.767 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.767 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.768 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.768 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.768 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.768 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # continue 00:05:19.768 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.768 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.768 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.768 18:21:27 -- setup/common.sh@33 -- # echo 0 00:05:19.768 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:19.768 18:21:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.768 18:21:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.768 18:21:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.768 18:21:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.768 node0=1024 expecting 1024 00:05:19.768 ************************************ 00:05:19.768 END TEST default_setup 00:05:19.768 ************************************ 00:05:19.768 18:21:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.768 18:21:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.768 00:05:19.768 real 0m1.029s 00:05:19.768 user 0m0.460s 00:05:19.768 sys 0m0.484s 00:05:19.768 18:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.768 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:19.768 18:21:27 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:19.768 18:21:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.768 18:21:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.768 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:19.768 ************************************ 00:05:19.768 START TEST per_node_1G_alloc 00:05:19.768 ************************************ 
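The xtrace above shows setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time with IFS=': ' until it reaches the requested key (HugePages_Total, then HugePages_Surp for node 0), after which setup/hugepages.sh checks that the 1024 allocated pages equal nr_hugepages plus surplus plus reserved and that node 0 reports the expected 1024. A condensed, stand-alone bash sketch of that lookup follows; the helper name get_meminfo_field is hypothetical and the structure is simplified relative to what the trace shows.

#!/usr/bin/env bash
# Hypothetical, condensed sketch of the meminfo lookup traced above -- not
# the repo's setup/common.sh. Prints the value column for one key, reading
# either /proc/meminfo or a per-NUMA-node meminfo file.
get_meminfo_field() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # e.g. "HugePages_Total:    1024" -> key / value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# The consistency check the trace performs then amounts to roughly:
#   (( $(get_meminfo_field HugePages_Total) == nr_hugepages + surp + resv ))
#   (( $(get_meminfo_field HugePages_Total 0) == 1024 ))   # node0=1024 expecting 1024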
00:05:19.768 18:21:27 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:05:19.768 18:21:27 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:19.768 18:21:27 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:19.768 18:21:27 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:19.768 18:21:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:19.768 18:21:27 -- setup/hugepages.sh@51 -- # shift 00:05:19.768 18:21:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:19.768 18:21:27 -- setup/hugepages.sh@52 -- # local node_ids 00:05:19.768 18:21:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.768 18:21:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:19.768 18:21:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:19.768 18:21:27 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:19.768 18:21:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.768 18:21:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:19.768 18:21:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.768 18:21:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.768 18:21:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.768 18:21:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:19.768 18:21:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:19.768 18:21:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:19.768 18:21:27 -- setup/hugepages.sh@73 -- # return 0 00:05:19.768 18:21:27 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:19.768 18:21:27 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:19.768 18:21:27 -- setup/hugepages.sh@146 -- # setup output 00:05:19.768 18:21:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.768 18:21:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.027 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.027 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.027 18:21:27 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:20.027 18:21:27 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:20.027 18:21:27 -- setup/hugepages.sh@89 -- # local node 00:05:20.027 18:21:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.027 18:21:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.027 18:21:27 -- setup/hugepages.sh@92 -- # local surp 00:05:20.027 18:21:27 -- setup/hugepages.sh@93 -- # local resv 00:05:20.027 18:21:27 -- setup/hugepages.sh@94 -- # local anon 00:05:20.027 18:21:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.027 18:21:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.027 18:21:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.027 18:21:27 -- setup/common.sh@18 -- # local node= 00:05:20.027 18:21:27 -- setup/common.sh@19 -- # local var val 00:05:20.027 18:21:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.027 18:21:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.027 18:21:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.027 18:21:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.027 18:21:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.027 18:21:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.027 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.027 18:21:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7627816 kB' 'MemAvailable: 10530280 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490876 kB' 'Inactive: 2733608 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167112 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79972 kB' 'KernelStack: 6600 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.027 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:21:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.027 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 
18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.289 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.289 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.290 18:21:27 -- setup/common.sh@33 -- # echo 0 00:05:20.290 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:20.290 18:21:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.290 18:21:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.290 18:21:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.290 18:21:27 -- setup/common.sh@18 -- # local node= 00:05:20.290 18:21:27 -- setup/common.sh@19 -- # local var val 00:05:20.290 18:21:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.290 18:21:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.290 18:21:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.290 18:21:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.290 18:21:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.290 18:21:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7627816 kB' 'MemAvailable: 10530280 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490780 kB' 'Inactive: 2733608 kB' 'Active(anon): 128556 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167120 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79980 
kB' 'KernelStack: 6656 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 
18:21:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.290 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.290 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.291 18:21:27 -- setup/common.sh@33 -- # echo 0 00:05:20.291 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:20.291 18:21:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.291 18:21:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.291 18:21:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.291 18:21:27 -- setup/common.sh@18 -- # local node= 00:05:20.291 18:21:27 -- setup/common.sh@19 -- # local var val 00:05:20.291 18:21:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.291 18:21:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.291 18:21:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.291 18:21:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.291 18:21:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.291 18:21:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7627816 kB' 'MemAvailable: 10530280 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490732 kB' 'Inactive: 2733608 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167112 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79972 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- 
# continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.291 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.291 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.292 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.292 18:21:27 -- setup/common.sh@33 -- # echo 0 00:05:20.292 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:20.292 nr_hugepages=512 00:05:20.292 resv_hugepages=0 00:05:20.292 18:21:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.292 18:21:27 -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=512 00:05:20.292 18:21:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.292 surplus_hugepages=0 00:05:20.292 18:21:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.292 anon_hugepages=0 00:05:20.292 18:21:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.292 18:21:27 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.292 18:21:27 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:20.292 18:21:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.292 18:21:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.292 18:21:27 -- setup/common.sh@18 -- # local node= 00:05:20.292 18:21:27 -- setup/common.sh@19 -- # local var val 00:05:20.292 18:21:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.292 18:21:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.292 18:21:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.292 18:21:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.292 18:21:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.292 18:21:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.292 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7627816 kB' 'MemAvailable: 10530280 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490724 kB' 'Inactive: 2733608 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119604 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167104 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79964 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 
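The entries above and below are the setup/common.sh get_meminfo helper walking a meminfo-style file one 'Field: value' row at a time: IFS=': ' splits each row, every field that is not the requested one hits the continue branch, and the matching field's value is echoed (512 for HugePages_Total in this pass, 0 for HugePages_Rsvd in the previous one). A minimal stand-alone sketch of that lookup, reconstructed from the xtrace rather than copied from the script, so the function name and layout below are illustrative only:

    # Print the value of one field from /proc/meminfo, or 0 if the field is absent.
    # Reconstruction for illustration; the real helper also handles the per-node files
    # under /sys/devices/system/node/node<N>/meminfo, as seen later in this log.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

Against the snapshot printed above, get_meminfo_field HugePages_Total would print 512 and get_meminfo_field HugePages_Rsvd would print 0; those are the values that feed checks like (( 512 == nr_hugepages + surp + resv )) earlier in the trace.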
00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 
18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.293 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.293 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.294 18:21:27 -- setup/common.sh@33 -- # echo 512 00:05:20.294 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:20.294 18:21:27 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.294 18:21:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.294 18:21:27 -- setup/hugepages.sh@27 -- # local node 00:05:20.294 18:21:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.294 18:21:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.294 18:21:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.294 18:21:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.294 18:21:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.294 18:21:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.294 18:21:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.294 18:21:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.294 18:21:27 -- setup/common.sh@18 -- # local node=0 00:05:20.294 18:21:27 -- setup/common.sh@19 -- # local var val 00:05:20.294 18:21:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.294 18:21:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.294 18:21:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.294 18:21:27 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.294 18:21:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.294 18:21:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7627816 kB' 'MemUsed: 4614164 kB' 'SwapCached: 0 kB' 'Active: 490772 kB' 'Inactive: 2733608 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3106300 kB' 'Mapped: 49020 kB' 'AnonPages: 119728 kB' 'Shmem: 10468 kB' 'KernelStack: 6656 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167100 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- 
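At this point get_meminfo has been called with an explicit node (HugePages_Surp for node 0), so the data source switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo; each row there carries a leading 'Node 0' column, which the mem=("${mem[@]#Node +([0-9]) }") step strips before the same field-matching loop runs over it. A rough stand-alone equivalent of that per-node read, using awk only for brevity (the script itself stays in pure bash, as the trace shows) and with illustrative names:

    # Print one hugepage counter for every online NUMA node.
    field=HugePages_Surp
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        # per-node rows look like: "Node 0 HugePages_Surp:     0"
        val=$(awk -v f="$field:" '$3 == f {print $4}' "$node_dir/meminfo")
        echo "${node_dir##*/} $field=${val:-0}"
    done

On the single-node VM in this run that prints node0 HugePages_Surp=0, matching the echo 0 the helper returns a few entries further on.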
setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 18:21:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # continue 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 18:21:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 18:21:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.295 18:21:27 -- setup/common.sh@33 -- # echo 0 00:05:20.295 18:21:27 -- setup/common.sh@33 -- # return 0 00:05:20.295 18:21:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.295 18:21:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.295 18:21:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.295 node0=512 expecting 512 00:05:20.295 18:21:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:20.295 18:21:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:20.295 00:05:20.295 real 0m0.549s 00:05:20.295 user 0m0.284s 00:05:20.295 sys 0m0.270s 00:05:20.295 18:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.295 ************************************ 00:05:20.295 END TEST per_node_1G_alloc 00:05:20.295 ************************************ 00:05:20.295 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.295 18:21:27 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:20.295 18:21:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.295 18:21:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.295 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.295 ************************************ 00:05:20.295 START TEST even_2G_alloc 00:05:20.295 ************************************ 00:05:20.295 18:21:27 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:20.295 18:21:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:20.295 18:21:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.295 18:21:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.295 18:21:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.295 18:21:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.295 18:21:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.295 18:21:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.295 18:21:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.295 18:21:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.295 18:21:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.295 18:21:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:20.295 18:21:27 -- setup/hugepages.sh@83 -- # : 0 00:05:20.295 18:21:27 -- 
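The per_node_1G_alloc case closes out above (node0=512 expecting 512, then the timing summary and END TEST banner) and even_2G_alloc starts: get_test_nr_hugepages 2097152 settles on nr_hugepages=1024 because, at the 2048 kB Hugepagesize reported in the meminfo snapshots, a 2 GiB request of 2097152 kB is 2097152 / 2048 = 1024 pages, and with _no_nodes=1 the whole amount lands on the only node (nodes_test[_no_nodes - 1]=1024). A small sketch of that sizing arithmetic, assuming a kB-denominated request; the variable names are illustrative, not the script's own:

    # Convert a kB-sized hugepage request into a page count.
    size_kb=2097152                                                      # 2 GiB, as requested above
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB in the snapshots here
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
    echo "NRHUGE=$nr_hugepages"                                          # cf. NRHUGE=1024 HUGE_EVEN_ALLOC=yes below

The NRHUGE=1024 HUGE_EVEN_ALLOC=yes settings that follow hand this figure to the setup.sh run, after which verify_nr_hugepages re-reads /proc/meminfo expecting HugePages_Total to come back as 1024.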
setup/hugepages.sh@84 -- # : 0 00:05:20.295 18:21:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.295 18:21:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:20.295 18:21:27 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:20.295 18:21:27 -- setup/hugepages.sh@153 -- # setup output 00:05:20.295 18:21:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.295 18:21:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.866 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.866 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.866 18:21:28 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:20.866 18:21:28 -- setup/hugepages.sh@89 -- # local node 00:05:20.866 18:21:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.866 18:21:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.866 18:21:28 -- setup/hugepages.sh@92 -- # local surp 00:05:20.866 18:21:28 -- setup/hugepages.sh@93 -- # local resv 00:05:20.866 18:21:28 -- setup/hugepages.sh@94 -- # local anon 00:05:20.866 18:21:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.866 18:21:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.866 18:21:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.867 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:20.867 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:20.867 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.867 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.867 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.867 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.867 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.867 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6583360 kB' 'MemAvailable: 9485824 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 491324 kB' 'Inactive: 2733608 kB' 'Active(anon): 129100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120236 kB' 'Mapped: 49124 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167132 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79992 kB' 'KernelStack: 6692 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 
18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.867 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.867 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # 
continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.868 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:20.868 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:20.868 18:21:28 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.868 18:21:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.868 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.868 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:20.868 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:20.868 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.868 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.868 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.868 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.868 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.868 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6583612 kB' 'MemAvailable: 9486076 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490476 kB' 'Inactive: 2733608 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167112 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79972 kB' 'KernelStack: 6640 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # 
continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.868 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.868 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- 
# continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.869 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:20.869 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:20.869 18:21:28 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.869 18:21:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.869 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.869 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:20.869 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:20.869 18:21:28 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:20.869 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.869 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.869 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.869 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.869 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6583612 kB' 'MemAvailable: 9486076 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490504 kB' 'Inactive: 2733608 kB' 'Active(anon): 128280 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119728 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167112 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79972 kB' 'KernelStack: 6656 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.869 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.869 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 
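This long run of "continue" entries is setup/common.sh's get_meminfo walking the meminfo snapshot one field at a time with IFS=': ' and skipping every key that is not the one requested. A minimal standalone sketch of that pattern (function name hypothetical, per-node handling omitted):

    get_meminfo_sketch() {                      # e.g. get_meminfo_sketch HugePages_Rsvd
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # print the value (kB for sized fields) as soon as the key matches
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

The traced script first snapshots the file with mapfile -t mem and a printf, then matches each entry in turn, which is exactly what produces the repeated read/continue lines seen here.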
00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- 
setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.870 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.870 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.871 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:20.871 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:20.871 nr_hugepages=1024 00:05:20.871 resv_hugepages=0 00:05:20.871 surplus_hugepages=0 00:05:20.871 18:21:28 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.871 18:21:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.871 18:21:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.871 18:21:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.871 18:21:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.871 anon_hugepages=0 00:05:20.871 18:21:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.871 18:21:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.871 18:21:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.871 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.871 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:20.871 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:20.871 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.871 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.871 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.871 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.871 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.871 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6583364 kB' 'MemAvailable: 9485828 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 
490796 kB' 'Inactive: 2733608 kB' 'Active(anon): 128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119724 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167108 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79968 kB' 'KernelStack: 6656 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- 
setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.871 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.871 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 
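The backslash-heavy patterns in these comparisons (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and friends) are not garbled output; they are how bash xtrace renders a quoted right-hand side inside [[ ]], i.e. a literal match. The two tests below are equivalent:

    var=HugePages_Total
    [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] && echo literal match
    [[ $var == "HugePages_Total" ]]              && echo quoted match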
00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 
00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 
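What hugepages.sh is verifying around this point is plain arithmetic on the values the trace just printed: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages. A self-contained way to reproduce that check, with the numbers from this run, might be:

    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "HugePages_Total ($total) matches nr_hugepages + surp + resv"
    fi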
00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.872 18:21:28 -- setup/common.sh@33 -- # echo 1024 00:05:20.872 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:20.872 18:21:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.872 18:21:28 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.872 18:21:28 -- setup/hugepages.sh@27 -- # local node 00:05:20.872 18:21:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.872 18:21:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.872 18:21:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.872 18:21:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.872 18:21:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.872 18:21:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.872 18:21:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.872 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.872 18:21:28 -- setup/common.sh@18 -- # local node=0 00:05:20.872 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:20.872 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.872 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.872 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.872 18:21:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.872 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.872 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6583364 kB' 'MemUsed: 5658616 kB' 'SwapCached: 0 kB' 'Active: 490576 kB' 'Inactive: 2733608 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3106300 kB' 'Mapped: 48760 kB' 'AnonPages: 119508 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167108 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.872 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.872 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 
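For the per-node pass (node=0 above) the same loop runs against /sys/devices/system/node/node0/meminfo, whose lines carry a leading "Node 0 " prefix that the script strips with the ${mem[@]#Node +([0-9]) } expansion before matching. A rough equivalent, not the script's actual code:

    node=0
    while read -r _ _ var val _; do           # first two fields are "Node 0"
        [[ $var == "HugePages_Surp:" ]] && echo "$val"
    done < /sys/devices/system/node/node${node}/meminfo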
00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- 
setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # continue 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.873 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.873 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.873 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:20.873 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:20.873 18:21:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.873 18:21:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.873 18:21:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.873 18:21:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.873 node0=1024 expecting 1024 00:05:20.873 ************************************ 00:05:20.873 END TEST even_2G_alloc 00:05:20.873 ************************************ 00:05:20.873 18:21:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.873 18:21:28 -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.873 00:05:20.873 real 0m0.567s 00:05:20.873 user 0m0.258s 00:05:20.873 sys 0m0.311s 00:05:20.873 18:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.873 18:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:20.873 18:21:28 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:20.873 18:21:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.873 18:21:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.873 18:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.132 ************************************ 00:05:21.132 START TEST odd_alloc 00:05:21.132 ************************************ 00:05:21.132 18:21:28 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:21.132 18:21:28 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:21.132 18:21:28 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:21.132 18:21:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.132 18:21:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.132 18:21:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:21.132 18:21:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.132 18:21:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.132 18:21:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.132 18:21:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:21.132 18:21:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.132 18:21:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.132 18:21:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.132 18:21:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.132 18:21:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.132 18:21:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.132 18:21:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:21.132 18:21:28 -- setup/hugepages.sh@83 -- # : 0 00:05:21.132 18:21:28 -- setup/hugepages.sh@84 -- # : 0 00:05:21.132 18:21:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.132 18:21:28 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:21.132 18:21:28 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:21.132 18:21:28 -- setup/hugepages.sh@160 -- # setup output 00:05:21.132 18:21:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.132 18:21:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.394 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.394 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.394 18:21:28 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:21.394 18:21:28 -- setup/hugepages.sh@89 -- # local node 00:05:21.394 18:21:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.394 18:21:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.394 18:21:28 -- setup/hugepages.sh@92 -- # local surp 00:05:21.394 18:21:28 -- setup/hugepages.sh@93 -- # local resv 00:05:21.394 18:21:28 -- setup/hugepages.sh@94 -- # local anon 00:05:21.394 18:21:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.394 18:21:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.394 18:21:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.394 18:21:28 -- setup/common.sh@18 -- # local 
node= 00:05:21.394 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:21.394 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.394 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.394 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.394 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.394 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.394 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575532 kB' 'MemAvailable: 9477996 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 491320 kB' 'Inactive: 2733608 kB' 'Active(anon): 129096 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120284 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167116 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79976 kB' 'KernelStack: 6676 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # 
continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.394 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.394 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.395 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:21.395 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:21.395 18:21:28 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.395 18:21:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.395 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.395 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:21.395 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:21.395 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.395 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.395 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.395 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.395 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.395 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575532 kB' 'MemAvailable: 9477996 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490732 kB' 'Inactive: 2733608 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167096 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79956 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 
-- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.395 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.395 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.396 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:21.396 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:21.396 18:21:28 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.396 18:21:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.396 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.396 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:21.396 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:21.396 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.396 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.396 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.396 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.396 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.396 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575796 kB' 'MemAvailable: 9478260 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490844 kB' 'Inactive: 2733608 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167092 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79952 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 
18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.396 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.396 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 
-- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.397 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.397 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:21.397 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:21.397 nr_hugepages=1025 00:05:21.397 resv_hugepages=0 00:05:21.397 surplus_hugepages=0 00:05:21.397 anon_hugepages=0 00:05:21.397 18:21:28 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.397 18:21:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:21.397 18:21:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.397 18:21:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.397 18:21:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.397 18:21:28 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:21.397 18:21:28 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:21.397 18:21:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.397 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.397 18:21:28 -- setup/common.sh@18 -- # local node= 00:05:21.397 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:21.397 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.397 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.397 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.397 18:21:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.397 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.397 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.397 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575796 kB' 'MemAvailable: 9478260 kB' 'Buffers: 2436 kB' 'Cached: 3103864 kB' 'SwapCached: 0 kB' 'Active: 490736 kB' 'Inactive: 2733608 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167092 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79952 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.398 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.398 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.399 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.399 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 
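The long run of '[[ <field> == HugePages_Total ]] ... continue' entries here is setup/common.sh's get_meminfo walking a meminfo file one field at a time until it reaches the requested key. A minimal stand-alone sketch of that loop, assuming the same IFS=': ' / read -r convention visible in the trace (the helper name get_field and its arguments are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch: scan a meminfo-style file for one field and print its value.
    get_field() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do   # split "Name:   12345 kB" into name / value / unit
            [[ $var == "$get" ]] || continue   # non-matching fields are skipped (the 'continue' trace entries)
            echo "$val"                        # matching field: print the numeric value
            return 0
        done < "$file"
        return 1                               # field not present in the file
    }
    # e.g. get_field HugePages_Total   -> prints 1025 on this runner

Every non-matching field produces one 'continue' entry under xtrace, which is why this scan dominates the log.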
00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.658 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.658 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.658 18:21:28 -- setup/common.sh@33 -- # echo 1025 00:05:21.658 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:21.658 18:21:28 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:21.658 18:21:28 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.658 18:21:28 -- setup/hugepages.sh@27 -- # local node 00:05:21.658 18:21:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.658 18:21:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:21.658 18:21:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.659 18:21:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.659 18:21:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.659 18:21:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 
resv )) 00:05:21.659 18:21:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.659 18:21:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.659 18:21:28 -- setup/common.sh@18 -- # local node=0 00:05:21.659 18:21:28 -- setup/common.sh@19 -- # local var val 00:05:21.659 18:21:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.659 18:21:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.659 18:21:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.659 18:21:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.659 18:21:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.659 18:21:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575796 kB' 'MemUsed: 5666184 kB' 'SwapCached: 0 kB' 'Active: 490736 kB' 'Inactive: 2733608 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3106300 kB' 'Mapped: 48760 kB' 'AnonPages: 119644 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167088 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- 
setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # continue 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.659 18:21:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.659 18:21:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.660 18:21:28 -- setup/common.sh@33 -- # echo 0 00:05:21.660 18:21:28 -- setup/common.sh@33 -- # return 0 00:05:21.660 node0=1025 expecting 1025 00:05:21.660 ************************************ 00:05:21.660 END TEST odd_alloc 00:05:21.660 ************************************ 00:05:21.660 18:21:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.660 18:21:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.660 18:21:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.660 18:21:28 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:21.660 18:21:28 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:21.660 00:05:21.660 real 0m0.562s 00:05:21.660 user 0m0.272s 00:05:21.660 sys 0m0.286s 00:05:21.660 18:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.660 18:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.660 18:21:28 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:21.660 18:21:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.660 18:21:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.660 18:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.660 ************************************ 00:05:21.660 START TEST custom_alloc 00:05:21.660 ************************************ 00:05:21.660 18:21:28 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:21.660 18:21:28 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:21.660 18:21:28 -- setup/hugepages.sh@169 -- # local node 00:05:21.660 18:21:28 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:21.660 18:21:28 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:21.660 18:21:28 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:21.660 18:21:28 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:21.660 18:21:28 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:21.660 18:21:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.660 18:21:28 -- 
setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:21.660 18:21:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.660 18:21:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.660 18:21:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.660 18:21:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:21.660 18:21:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.660 18:21:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.660 18:21:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.660 18:21:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:21.660 18:21:28 -- setup/hugepages.sh@83 -- # : 0 00:05:21.660 18:21:28 -- setup/hugepages.sh@84 -- # : 0 00:05:21.660 18:21:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:21.660 18:21:28 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:21.660 18:21:28 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:21.660 18:21:28 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:21.660 18:21:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.660 18:21:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.660 18:21:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:21.660 18:21:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.660 18:21:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.660 18:21:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.660 18:21:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:21.660 18:21:28 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:21.660 18:21:28 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:21.660 18:21:28 -- setup/hugepages.sh@78 -- # return 0 00:05:21.660 18:21:28 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:21.660 18:21:28 -- setup/hugepages.sh@187 -- # setup output 00:05:21.660 18:21:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.660 18:21:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.919 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.919 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.919 18:21:29 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:21.919 18:21:29 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:21.919 18:21:29 -- setup/hugepages.sh@89 -- # local node 00:05:21.919 18:21:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.919 18:21:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.919 18:21:29 -- setup/hugepages.sh@92 -- # local surp 00:05:21.919 18:21:29 -- setup/hugepages.sh@93 -- # local resv 00:05:21.919 18:21:29 -- setup/hugepages.sh@94 -- # local anon 00:05:21.919 18:21:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.919 18:21:29 -- setup/hugepages.sh@97 -- 
# get_meminfo AnonHugePages 00:05:21.919 18:21:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.919 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:21.919 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:21.920 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.920 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.920 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.920 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.920 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.920 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7628636 kB' 'MemAvailable: 10531104 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 491068 kB' 'Inactive: 2733612 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 48912 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167088 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79948 kB' 'KernelStack: 6648 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 
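Just before this verification pass, the trace shows custom_alloc sizing its pool: get_test_nr_hugepages 1048576 yields nr_hugepages=512, which is pinned to node 0 via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh runs. A hedged sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps (variable names are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch: derive the per-node hugepage count the way the trace suggests.
    size_kb=1048576                                                      # requested pool size in kB (1 GiB)
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512
    HUGENODE="nodes_hp[0]=${nr_hugepages}"                               # all pages requested on node 0
    echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"
    # scripts/setup.sh is then invoked with HUGENODE in its environment, per the trace.

This is consistent with the 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB' values printed by get_meminfo afterwards.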
00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.920 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.920 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.921 18:21:29 -- setup/common.sh@33 -- # echo 0 00:05:21.921 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:21.921 18:21:29 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.921 18:21:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.921 18:21:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.921 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:21.921 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:21.921 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.921 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.921 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.921 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.921 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.921 18:21:29 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7628636 kB' 'MemAvailable: 10531104 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490856 kB' 'Inactive: 2733612 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167088 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79948 kB' 'KernelStack: 6656 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- 
setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.921 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.921 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.922 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.922 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.922 18:21:29 -- setup/common.sh@32 -- # continue 00:05:21.922 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.922 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 
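The AnonHugePages and HugePages_Surp scans in this pass both return 0; HugePages_Rsvd is read next, and verify_nr_hugepages then applies the same check seen at hugepages.sh@110 during odd_alloc: HugePages_Total must equal nr_hugepages + surp + resv. A minimal sketch of that check, assuming awk extraction in place of the script's own get_meminfo (the values mirror this run):

    #!/usr/bin/env bash
    # Sketch: the consistency check the trace is building up to.
    nr_hugepages=512                                              # what custom_alloc requested
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
    fi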
00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.182 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.182 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.183 18:21:29 -- setup/common.sh@33 -- # echo 0 00:05:22.183 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:22.183 18:21:29 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.183 18:21:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.183 18:21:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.183 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:22.183 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:22.183 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.183 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.183 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.183 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.183 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.183 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7628636 kB' 'MemAvailable: 10531104 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490596 kB' 'Inactive: 2733612 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119480 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167084 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79944 kB' 'KernelStack: 6640 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 
'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 
18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # 
continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.183 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.183 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.184 18:21:29 -- setup/common.sh@33 -- # echo 0 00:05:22.184 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:22.184 18:21:29 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.184 18:21:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:22.184 nr_hugepages=512 00:05:22.184 resv_hugepages=0 00:05:22.184 surplus_hugepages=0 00:05:22.184 anon_hugepages=0 00:05:22.184 18:21:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.184 18:21:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.184 18:21:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.184 18:21:29 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:22.184 18:21:29 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:22.184 18:21:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.184 18:21:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.184 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:22.184 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:22.184 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.184 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.184 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.184 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.184 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.184 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7628636 kB' 'MemAvailable: 10531104 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490852 kB' 'Inactive: 2733612 kB' 'Active(anon): 128628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167084 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79944 kB' 'KernelStack: 6640 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 
00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.184 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.184 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': 
' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.185 18:21:29 -- setup/common.sh@33 -- # echo 512 00:05:22.185 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:22.185 18:21:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:22.185 18:21:29 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.185 18:21:29 -- setup/hugepages.sh@27 -- # local node 00:05:22.185 18:21:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.185 18:21:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:22.185 18:21:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.185 18:21:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.185 
18:21:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.185 18:21:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.185 18:21:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.185 18:21:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.185 18:21:29 -- setup/common.sh@18 -- # local node=0 00:05:22.185 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:22.185 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.185 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.185 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.185 18:21:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.185 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.185 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7628636 kB' 'MemUsed: 4613344 kB' 'SwapCached: 0 kB' 'Active: 490848 kB' 'Inactive: 2733612 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3106304 kB' 'Mapped: 48784 kB' 'AnonPages: 119728 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167084 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 79944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.185 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.185 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 
18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.186 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.186 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.186 18:21:29 -- setup/common.sh@33 -- # echo 0 00:05:22.186 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:22.186 node0=512 expecting 512 00:05:22.186 18:21:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.186 18:21:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.186 18:21:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.186 18:21:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.186 18:21:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:22.186 18:21:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:22.186 ************************************ 00:05:22.186 END TEST custom_alloc 00:05:22.186 ************************************ 00:05:22.186 00:05:22.186 real 0m0.560s 00:05:22.186 user 0m0.267s 00:05:22.186 sys 0m0.298s 00:05:22.186 18:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.186 18:21:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.186 18:21:29 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:22.186 18:21:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.186 18:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.186 18:21:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.186 ************************************ 00:05:22.186 START TEST no_shrink_alloc 00:05:22.186 ************************************ 00:05:22.186 18:21:29 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:22.186 18:21:29 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:22.186 18:21:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:22.187 18:21:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:22.187 18:21:29 -- setup/hugepages.sh@51 -- # shift 00:05:22.187 18:21:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:22.187 18:21:29 -- setup/hugepages.sh@52 -- # local node_ids 00:05:22.187 18:21:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.187 18:21:29 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:05:22.187 18:21:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:22.187 18:21:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:22.187 18:21:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.187 18:21:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:22.187 18:21:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.187 18:21:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.187 18:21:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.187 18:21:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:22.187 18:21:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:22.187 18:21:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:22.187 18:21:29 -- setup/hugepages.sh@73 -- # return 0 00:05:22.187 18:21:29 -- setup/hugepages.sh@198 -- # setup output 00:05:22.187 18:21:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.187 18:21:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.706 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.706 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.706 18:21:29 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:22.706 18:21:29 -- setup/hugepages.sh@89 -- # local node 00:05:22.706 18:21:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.706 18:21:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.706 18:21:29 -- setup/hugepages.sh@92 -- # local surp 00:05:22.706 18:21:29 -- setup/hugepages.sh@93 -- # local resv 00:05:22.706 18:21:29 -- setup/hugepages.sh@94 -- # local anon 00:05:22.706 18:21:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.706 18:21:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.706 18:21:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.706 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:22.706 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:22.706 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.706 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.706 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.706 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.706 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.706 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6577328 kB' 'MemAvailable: 9479796 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 491076 kB' 'Inactive: 2733612 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 48852 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167152 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80012 kB' 'KernelStack: 6648 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 
0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.706 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.706 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 
18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # 
continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.707 18:21:29 -- setup/common.sh@33 -- # echo 0 00:05:22.707 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:22.707 18:21:29 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.707 18:21:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.707 18:21:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.707 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:22.707 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:22.707 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.707 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.707 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.707 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.707 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.707 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6577328 kB' 'MemAvailable: 9479796 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490872 kB' 'Inactive: 2733612 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119800 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167204 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80064 kB' 'KernelStack: 6656 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # 
continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.707 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.707 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 
00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.708 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.708 18:21:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.709 18:21:29 -- setup/common.sh@33 -- # echo 0 00:05:22.709 18:21:29 -- setup/common.sh@33 -- # return 0 00:05:22.709 18:21:29 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.709 18:21:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.709 18:21:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.709 18:21:29 -- setup/common.sh@18 -- # local node= 00:05:22.709 18:21:29 -- setup/common.sh@19 -- # local var val 00:05:22.709 18:21:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.709 18:21:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.709 18:21:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.709 18:21:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.709 18:21:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.709 18:21:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6577328 kB' 'MemAvailable: 9479796 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490804 kB' 'Inactive: 2733612 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119680 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167200 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80060 kB' 'KernelStack: 6640 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:29 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 
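[editor's note] The repeated IFS=': ' / read -r / [[ ... ]] / continue entries above are one pass of setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the requested one (here HugePages_Rsvd). A minimal paraphrased reconstruction of that loop, built from the trace rather than copied from the real SPDK script, looks roughly like this:

    # paraphrased sketch of setup/common.sh get_meminfo (@16-33 in the trace); not verbatim
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # a node argument switches the source to that node's sysfs meminfo file
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local mem=()
        mapfile -t mem < "$mem_f"                    # one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")             # strip "Node N " prefixes (needs extglob)
        while IFS=': ' read -r var val _; do         # "Key:   value kB" -> var=Key val=value
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                     # key not found
    }

For example, get_meminfo HugePages_Rsvd returns 0, which is exactly what the trace echoes at setup/common.sh@33 before returning.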
00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.709 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.709 18:21:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- 
setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.710 18:21:30 -- setup/common.sh@33 -- # echo 0 00:05:22.710 18:21:30 -- setup/common.sh@33 -- # return 0 00:05:22.710 18:21:30 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.710 18:21:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:22.710 nr_hugepages=1024 00:05:22.710 18:21:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.710 resv_hugepages=0 00:05:22.710 surplus_hugepages=0 00:05:22.710 anon_hugepages=0 00:05:22.710 18:21:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.710 18:21:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.710 18:21:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.710 18:21:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.710 18:21:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.710 18:21:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.710 18:21:30 -- setup/common.sh@18 -- # local node= 00:05:22.710 18:21:30 -- setup/common.sh@19 -- # local var val 00:05:22.710 18:21:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.710 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
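[editor's note] The hugepages.sh@97-110 entries above collapse to a small accounting step: the anonymous, surplus and reserved hugepage counts are read, echoed, and checked against the preallocated pool. A condensed paraphrase, with nr_hugepages taken as the requested pool size (1024 here) and the arithmetic reconstructed from the trace rather than copied from the script:

    # condensed sketch of the verify_nr_hugepages accounting (hugepages.sh@97-110); not verbatim
    anon=$(get_meminfo AnonHugePages)    # transparent hugepages in use -> 0
    surp=$(get_meminfo HugePages_Surp)   # surplus pages -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # reserved pages -> 0
    echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
         "surplus_hugepages=$surp" "anon_hugepages=$anon"
    # the 1024-page pool (2048 kB pages, 2097152 kB = 2 GiB total, matching Hugetlb above)
    # only verifies if the kernel's HugePages_Total equals requested + surplus + reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

With surp=0 and resv=0 this reduces to HugePages_Total == 1024, which is why the trace proceeds past @107/@109 without error.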
00:05:22.710 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.710 18:21:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.710 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.710 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6577328 kB' 'MemAvailable: 9479796 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490800 kB' 'Inactive: 2733612 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119680 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167200 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80060 kB' 'KernelStack: 6640 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- 
setup/common.sh@32 -- # continue 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.710 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.710 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 
00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.711 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.711 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.712 18:21:30 -- setup/common.sh@33 -- # echo 1024 00:05:22.712 18:21:30 -- setup/common.sh@33 -- # return 0 00:05:22.712 18:21:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.712 18:21:30 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.712 18:21:30 -- setup/hugepages.sh@27 -- # local node 00:05:22.712 18:21:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.712 18:21:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:22.712 18:21:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.712 18:21:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.712 18:21:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.712 18:21:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.712 18:21:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.712 18:21:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.712 18:21:30 -- setup/common.sh@18 -- # local node=0 00:05:22.712 18:21:30 -- setup/common.sh@19 -- # local var val 00:05:22.712 18:21:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.712 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.712 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.712 18:21:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.712 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.712 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6577328 kB' 'MemUsed: 5664652 kB' 'SwapCached: 0 kB' 'Active: 490864 kB' 'Inactive: 2733612 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3106304 kB' 'Mapped: 48784 kB' 'AnonPages: 119812 kB' 'Shmem: 10468 kB' 'KernelStack: 6656 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167200 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80060 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 
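[editor's note] The get_nodes / hugepages.sh@115-117 entries above switch the same lookup to per-NUMA-node data: nodes are globbed out of sysfs, then get_meminfo is re-run with a node argument so it reads /sys/devices/system/node/node0/meminfo (whose lines carry a "Node 0 " prefix, stripped by the extglob expansion) instead of /proc/meminfo. A reconstructed sketch; array and variable names follow the trace, and the per-node value is shown as the literal the trace prints rather than however the real script derives it:

    # sketch of the per-node pass (hugepages.sh get_nodes and @115-117); not verbatim
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # 1024 as printed in the trace; real script reads it per node
    done
    no_nodes=${#nodes_sys[@]}            # -> 1 on this single-node VM
    surp0=$(get_meminfo HugePages_Surp 0)   # node argument -> node0's sysfs meminfo

On this VM there is only node0, so the per-node check reduces to the same HugePages_Surp=0 lookup, and the trace goes on to print 'node0=1024 expecting 1024'.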
00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.712 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.712 18:21:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- 
setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # continue 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.713 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.713 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.713 18:21:30 -- setup/common.sh@33 -- # echo 0 00:05:22.713 18:21:30 -- setup/common.sh@33 -- # return 0 00:05:22.713 18:21:30 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:05:22.713 18:21:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.713 18:21:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.713 node0=1024 expecting 1024 00:05:22.713 18:21:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.713 18:21:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:22.713 18:21:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:22.713 18:21:30 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:22.713 18:21:30 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:22.713 18:21:30 -- setup/hugepages.sh@202 -- # setup output 00:05:22.713 18:21:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.713 18:21:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.281 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.281 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.281 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:23.281 18:21:30 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:23.281 18:21:30 -- setup/hugepages.sh@89 -- # local node 00:05:23.281 18:21:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.281 18:21:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.281 18:21:30 -- setup/hugepages.sh@92 -- # local surp 00:05:23.281 18:21:30 -- setup/hugepages.sh@93 -- # local resv 00:05:23.281 18:21:30 -- setup/hugepages.sh@94 -- # local anon 00:05:23.281 18:21:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.281 18:21:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.281 18:21:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.281 18:21:30 -- setup/common.sh@18 -- # local node= 00:05:23.281 18:21:30 -- setup/common.sh@19 -- # local var val 00:05:23.281 18:21:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.281 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.281 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.281 18:21:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.282 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.282 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.282 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.282 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.282 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575312 kB' 'MemAvailable: 9477780 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 491396 kB' 'Inactive: 2733612 kB' 'Active(anon): 129172 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120324 kB' 'Mapped: 48948 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167180 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80040 kB' 'KernelStack: 6696 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@32: every field from MemTotal through HardwareCorrupted fails the AnonHugePages comparison and hits 'continue' ...]
00:05:23.283 18:21:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:23.283 18:21:30 -- setup/common.sh@33 -- # echo 0
00:05:23.283 18:21:30 -- setup/common.sh@33 -- # return 0
00:05:23.283 18:21:30 -- setup/hugepages.sh@97 -- # anon=0
00:05:23.283 18:21:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:23.283 18:21:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.283 18:21:30 -- setup/common.sh@18 -- # local node=
00:05:23.283 18:21:30 -- setup/common.sh@19 -- # local var val
00:05:23.283 18:21:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:23.283 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.283 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.283 18:21:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.283 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.283 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.283 18:21:30 -- setup/common.sh@31 -- # IFS=': '
00:05:23.283 18:21:30 -- setup/common.sh@31 -- # read -r var val _
00:05:23.283 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575312 kB' 'MemAvailable: 9477780 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490912 kB' 'Inactive: 2733612 kB' 'Active(anon): 128688 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119804 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167172 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80032 kB' 'KernelStack: 6656 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB'
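The trace above is setup/common.sh's get_meminfo helper at work: it dumps a /proc/meminfo (or per-node) snapshot and then scans it field by field until the requested key matches, echoing that key's value. A minimal standalone sketch of the same key-scan idea, assuming a hypothetical file name get_meminfo_sketch.sh and simplified error handling (the real helper differs in detail):

    #!/usr/bin/env bash
    # get_meminfo_sketch.sh - echo the value of one meminfo field, optionally for one NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-}
        local src=/proc/meminfo var val _
        # per-node counters live in sysfs; fall back to the global file otherwise
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            src=/sys/devices/system/node/node$node/meminfo
        fi
        # per-node files prefix every line with "Node <n> "; strip that before splitting
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$src")
        return 1
    }

    get_meminfo HugePages_Total      # 1024 on this VM, per the snapshot above
    get_meminfo HugePages_Surp 0     # node-0 surplus 2 MiB pages, 0 here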
[... setup/common.sh@32: every field from MemTotal through SUnreclaim fails the HugePages_Surp comparison and hits 'continue' ...]
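The heavily escaped right-hand side in every comparison (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) is simply how bash xtrace renders a quoted string inside [[ == ]]: quoting keeps the match literal rather than treating the value as a glob pattern, and -x output backslash-escapes each character to show that. A two-line reproduction in any throwaway script (hypothetical demo, not part of the harness):

    set -x
    get=HugePages_Surp var=Dirty
    [[ $var == "$get" ]] || echo "no match"   # the test traces as: [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]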
[... the scan continues the same way from KernelStack through HugePages_Rsvd ...]
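Before the scan finally lands on HugePages_Surp below, note that the same counters can also be read far more directly when debugging a box by hand; this is not how the harness does it, just a quick cross-check against standard procfs/sysfs paths:

    grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages                     # global 2 MiB page count
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages     # node-0 count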
00:05:23.284 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:23.284 18:21:30 -- setup/common.sh@33 -- # echo 0
00:05:23.284 18:21:30 -- setup/common.sh@33 -- # return 0
00:05:23.284 18:21:30 -- setup/hugepages.sh@99 -- # surp=0
00:05:23.284 18:21:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:23.284 18:21:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:23.284 18:21:30 -- setup/common.sh@18 -- # local node=
00:05:23.284 18:21:30 -- setup/common.sh@19 -- # local var val
00:05:23.284 18:21:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:23.284 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.284 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.284 18:21:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.284 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.284 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.284 18:21:30 -- setup/common.sh@31 -- # IFS=': '
00:05:23.284 18:21:30 -- setup/common.sh@31 -- # read -r var val _
00:05:23.284 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575312 kB' 'MemAvailable: 9477780 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490660 kB' 'Inactive: 2733612 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119552 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167168 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80028 kB' 'KernelStack: 6656 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@32: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd comparison and hits 'continue' ...]
00:05:23.285 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:23.285 18:21:30 -- setup/common.sh@33 -- # echo 0
00:05:23.285 18:21:30 -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
18:21:30 -- setup/hugepages.sh@100 -- # resv=0
00:05:23.285 18:21:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:23.285 18:21:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:23.285 18:21:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:23.285 18:21:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:23.285 18:21:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:23.285 18:21:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
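The two arithmetic checks above are the heart of verify_nr_hugepages: the configured page count plus surplus and reserved pages has to match what the kernel reports. A minimal sketch of that bookkeeping, with the values hard-coded from this run (HugePages_Total is read next in the trace):

    nr_hugepages=1024   # what the test configured
    surp=0              # get_meminfo HugePages_Surp
    resv=0              # get_meminfo HugePages_Rsvd
    total=1024          # get_meminfo HugePages_Total
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"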
00:05:23.285 18:21:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:23.285 18:21:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:23.285 18:21:30 -- setup/common.sh@18 -- # local node=
00:05:23.285 18:21:30 -- setup/common.sh@19 -- # local var val
00:05:23.285 18:21:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:23.285 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.285 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.285 18:21:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.285 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.285 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.285 18:21:30 -- setup/common.sh@31 -- # IFS=': '
00:05:23.285 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575312 kB' 'MemAvailable: 9477780 kB' 'Buffers: 2436 kB' 'Cached: 3103868 kB' 'SwapCached: 0 kB' 'Active: 490892 kB' 'Inactive: 2733612 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 87140 kB' 'Slab: 167168 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80028 kB' 'KernelStack: 6656 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@32: every field from MemTotal through Unaccepted fails the HugePages_Total comparison and hits 'continue' ...]
00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:23.287 18:21:30 -- setup/common.sh@33 -- # echo 1024
00:05:23.287 18:21:30 -- setup/common.sh@33 -- # return 0
00:05:23.287 18:21:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:23.287 18:21:30 -- setup/hugepages.sh@112 -- # get_nodes
00:05:23.287 18:21:30 -- setup/hugepages.sh@27 -- # local node
00:05:23.287 18:21:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:23.287 18:21:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:23.287 18:21:30 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:23.287 18:21:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:23.287 18:21:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:23.287 18:21:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:23.287 18:21:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:23.287 18:21:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.287 18:21:30 -- setup/common.sh@18 -- # local node=0
00:05:23.287 18:21:30 -- setup/common.sh@19 -- # local var val
00:05:23.287 18:21:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:23.287 18:21:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.287 18:21:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:23.287 18:21:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:23.287 18:21:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.287 18:21:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': '
00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _
00:05:23.287 18:21:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6575312 kB' 'MemUsed: 5666668 kB' 'SwapCached: 0 kB' 'Active: 490624 kB' 'Inactive: 2733612 kB' 'Active(anon): 128400 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2733612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3106304 kB' 'Mapped: 48760 kB' 'AnonPages: 119548 kB' 'Shmem: 10468 kB' 'KernelStack: 6656 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87140 kB' 'Slab: 167168 kB' 'SReclaimable: 87140 kB' 'SUnreclaim: 80028 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 
00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.287 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.287 18:21:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # continue 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.288 18:21:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.288 18:21:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.288 18:21:30 -- setup/common.sh@33 -- # echo 0 00:05:23.288 18:21:30 -- setup/common.sh@33 -- # return 0 00:05:23.288 18:21:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.288 18:21:30 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.288 18:21:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.288 node0=1024 expecting 1024 00:05:23.288 ************************************ 00:05:23.288 END TEST no_shrink_alloc 00:05:23.288 ************************************ 00:05:23.288 18:21:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.288 18:21:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.288 18:21:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.288 00:05:23.288 real 0m1.133s 00:05:23.288 user 0m0.557s 00:05:23.288 sys 0m0.585s 00:05:23.288 18:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.288 18:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.288 18:21:30 -- setup/hugepages.sh@217 -- # clear_hp 00:05:23.288 18:21:30 -- setup/hugepages.sh@37 -- # local node hp 00:05:23.288 18:21:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:23.288 18:21:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:23.288 18:21:30 -- setup/hugepages.sh@41 -- # echo 0 00:05:23.288 18:21:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:23.288 18:21:30 -- setup/hugepages.sh@41 -- # echo 0 00:05:23.288 18:21:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:23.288 18:21:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:23.288 00:05:23.288 real 0m4.832s 00:05:23.288 user 0m2.273s 00:05:23.288 sys 0m2.469s 00:05:23.288 18:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.288 ************************************ 00:05:23.288 END TEST hugepages 00:05:23.288 ************************************ 00:05:23.288 18:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.547 18:21:30 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:23.547 18:21:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.547 18:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.547 18:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.547 ************************************ 00:05:23.547 START TEST driver 00:05:23.547 ************************************ 00:05:23.547 18:21:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:23.547 * Looking for test storage... 
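Before the driver-test output continues below, a note on the hugepages accounting that just finished above (END TEST hugepages): under all the xtrace noise it is a single small helper, get_meminfo, which scans /proc/meminfo, or the per-node copy under /sys/devices/system/node/nodeN/meminfo, until it reaches the requested key. That is why the trace shows one "continue" per non-matching field before it finally echoes 1024 for HugePages_Total and 0 for HugePages_Surp on node0. A minimal standalone sketch of that scan, not the actual test/setup/common.sh helper (the function name here is made up for illustration), could look like this:

# Sketch only: condensed equivalent of the get_meminfo loop traced above.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # A per-node query reads that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node <N> " prefix; drop it, then print the value
    # of the requested field (HugePages_* values are plain counts, most other
    # fields are in kB).
    sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k {print $2}'
}

get_meminfo_sketch HugePages_Total      # system-wide hugepage count
get_meminfo_sketch HugePages_Surp 0     # surplus hugepages on node0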
00:05:23.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:23.547 18:21:30 -- setup/driver.sh@68 -- # setup reset 00:05:23.547 18:21:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.547 18:21:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.149 18:21:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:24.149 18:21:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.149 18:21:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.149 18:21:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.149 ************************************ 00:05:24.149 START TEST guess_driver 00:05:24.149 ************************************ 00:05:24.149 18:21:31 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:24.149 18:21:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:24.149 18:21:31 -- setup/driver.sh@47 -- # local fail=0 00:05:24.149 18:21:31 -- setup/driver.sh@49 -- # pick_driver 00:05:24.149 18:21:31 -- setup/driver.sh@36 -- # vfio 00:05:24.149 18:21:31 -- setup/driver.sh@21 -- # local iommu_grups 00:05:24.149 18:21:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:24.149 18:21:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:24.149 18:21:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:24.149 18:21:31 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:24.149 18:21:31 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:24.149 18:21:31 -- setup/driver.sh@32 -- # return 1 00:05:24.149 18:21:31 -- setup/driver.sh@38 -- # uio 00:05:24.149 18:21:31 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:24.149 18:21:31 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:24.149 18:21:31 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:24.149 18:21:31 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:24.149 18:21:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:24.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:24.149 18:21:31 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:24.149 Looking for driver=uio_pci_generic 00:05:24.149 18:21:31 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:24.149 18:21:31 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:24.149 18:21:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:24.149 18:21:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.149 18:21:31 -- setup/driver.sh@45 -- # setup output config 00:05:24.149 18:21:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.149 18:21:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.716 18:21:32 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:24.716 18:21:32 -- setup/driver.sh@58 -- # continue 00:05:24.716 18:21:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.716 18:21:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.716 18:21:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:24.716 18:21:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.974 18:21:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.974 18:21:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:24.974 18:21:32 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.974 18:21:32 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:24.974 18:21:32 -- setup/driver.sh@65 -- # setup reset 00:05:24.974 18:21:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.974 18:21:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.540 00:05:25.540 real 0m1.437s 00:05:25.540 user 0m0.550s 00:05:25.540 sys 0m0.899s 00:05:25.540 18:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.541 18:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 ************************************ 00:05:25.541 END TEST guess_driver 00:05:25.541 ************************************ 00:05:25.541 ************************************ 00:05:25.541 END TEST driver 00:05:25.541 ************************************ 00:05:25.541 00:05:25.541 real 0m2.103s 00:05:25.541 user 0m0.782s 00:05:25.541 sys 0m1.366s 00:05:25.541 18:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.541 18:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 18:21:32 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:25.541 18:21:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.541 18:21:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.541 18:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 ************************************ 00:05:25.541 START TEST devices 00:05:25.541 ************************************ 00:05:25.541 18:21:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:25.541 * Looking for test storage... 00:05:25.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.799 18:21:32 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:25.799 18:21:32 -- setup/devices.sh@192 -- # setup reset 00:05:25.799 18:21:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.799 18:21:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.364 18:21:33 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:26.364 18:21:33 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:26.364 18:21:33 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:26.364 18:21:33 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:26.364 18:21:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:26.364 18:21:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:26.364 18:21:33 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:26.364 18:21:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:26.364 18:21:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:26.364 18:21:33 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:26.364 18:21:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:26.364 18:21:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:26.364 18:21:33 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:26.364 18:21:33 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:26.364 18:21:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:26.364 18:21:33 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:26.364 18:21:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:26.364 18:21:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:26.364 18:21:33 -- setup/devices.sh@196 -- # blocks=() 00:05:26.364 18:21:33 -- setup/devices.sh@196 -- # declare -a blocks 00:05:26.364 18:21:33 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:26.364 18:21:33 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:26.364 18:21:33 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:26.364 18:21:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:26.364 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:26.364 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:26.364 18:21:33 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:26.364 18:21:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:26.364 18:21:33 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:26.364 18:21:33 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:26.364 18:21:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:26.364 No valid GPT data, bailing 00:05:26.364 18:21:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:26.364 18:21:33 -- scripts/common.sh@393 -- # pt= 00:05:26.364 18:21:33 -- scripts/common.sh@394 -- # return 1 00:05:26.364 18:21:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:26.364 18:21:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:26.364 18:21:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:26.622 18:21:33 -- setup/common.sh@80 -- # echo 5368709120 00:05:26.622 18:21:33 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:26.622 18:21:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:26.622 18:21:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:26.622 18:21:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:26.622 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:26.622 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:26.622 18:21:33 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:26.622 18:21:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:26.622 18:21:33 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:26.622 18:21:33 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:26.622 18:21:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:26.622 No valid GPT data, bailing 00:05:26.622 18:21:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:26.622 18:21:33 -- scripts/common.sh@393 -- # pt= 00:05:26.622 18:21:33 -- scripts/common.sh@394 -- # return 1 00:05:26.622 18:21:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:26.622 18:21:33 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:26.622 18:21:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:26.622 18:21:33 -- setup/common.sh@80 -- # echo 4294967296 00:05:26.622 18:21:33 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:26.622 18:21:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:26.622 18:21:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:26.622 18:21:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:26.622 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:26.622 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:26.622 18:21:33 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:26.622 18:21:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:26.622 18:21:33 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:26.622 18:21:33 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:26.622 18:21:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:26.622 No valid GPT data, bailing 00:05:26.622 18:21:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:26.622 18:21:33 -- scripts/common.sh@393 -- # pt= 00:05:26.622 18:21:33 -- scripts/common.sh@394 -- # return 1 00:05:26.622 18:21:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:26.622 18:21:33 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:26.623 18:21:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:26.623 18:21:33 -- setup/common.sh@80 -- # echo 4294967296 00:05:26.623 18:21:33 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:26.623 18:21:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:26.623 18:21:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:26.623 18:21:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:26.623 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:26.623 18:21:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:26.623 18:21:33 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:26.623 18:21:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:26.623 18:21:33 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:26.623 18:21:33 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:26.623 18:21:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:26.623 No valid GPT data, bailing 00:05:26.623 18:21:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:26.623 18:21:34 -- scripts/common.sh@393 -- # pt= 00:05:26.623 18:21:34 -- scripts/common.sh@394 -- # return 1 00:05:26.623 18:21:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:26.623 18:21:34 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:26.623 18:21:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:26.623 18:21:34 -- setup/common.sh@80 -- # echo 4294967296 00:05:26.623 18:21:34 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:26.623 18:21:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:26.623 18:21:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:26.623 18:21:34 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:26.623 18:21:34 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:26.623 18:21:34 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:26.623 18:21:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.623 18:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.623 18:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.623 
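Before the mount tests start below, the devices preamble above has already decided which namespaces are usable: it skips zoned namespaces (queue/zoned other than "none"), treats any disk with a readable partition signature as in use (hence the repeated "No valid GPT data, bailing" from spdk-gpt.py followed by the blkid PTTYPE probe), and keeps anything of at least 3221225472 bytes. A rough approximation of that selection pass, using plain blkid where the suite calls its own scripts/spdk-gpt.py helper, might be:

# Illustrative only; not the setup/devices.sh implementation.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
blocks=()
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    [[ $name == *c* ]] && continue          # skip per-controller multipath nodes
    # Zoned namespaces are excluded.
    if [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]]; then
        continue
    fi
    # An existing partition table means the disk is already in use.
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$name") ]]; then
        continue
    fi
    size=$(( $(<"$dev/size") * 512 ))       # the size file counts 512-byte sectors
    (( size >= min_disk_size )) && blocks+=("$name")
done
(( ${#blocks[@]} )) && printf 'candidate disk: %s\n' "${blocks[@]}"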
************************************ 00:05:26.623 START TEST nvme_mount 00:05:26.623 ************************************ 00:05:26.623 18:21:34 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:26.623 18:21:34 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:26.623 18:21:34 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:26.623 18:21:34 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.623 18:21:34 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.623 18:21:34 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:26.623 18:21:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:26.623 18:21:34 -- setup/common.sh@40 -- # local part_no=1 00:05:26.623 18:21:34 -- setup/common.sh@41 -- # local size=1073741824 00:05:26.623 18:21:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:26.623 18:21:34 -- setup/common.sh@44 -- # parts=() 00:05:26.623 18:21:34 -- setup/common.sh@44 -- # local parts 00:05:26.623 18:21:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:26.623 18:21:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.623 18:21:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.623 18:21:34 -- setup/common.sh@46 -- # (( part++ )) 00:05:26.623 18:21:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.623 18:21:34 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:26.623 18:21:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:26.623 18:21:34 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:27.997 Creating new GPT entries in memory. 00:05:27.997 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:27.997 other utilities. 00:05:27.997 18:21:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:27.997 18:21:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.997 18:21:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.997 18:21:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.997 18:21:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:28.931 Creating new GPT entries in memory. 00:05:28.931 The operation has completed successfully. 
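The partition step that just completed is driven by sgdisk: the disk is zapped, then a single 262144-sector partition (sectors 2048 through 264191) is created while sync_dev_uevents.sh waits for the matching uevent so /dev/nvme0n1p1 exists before the next step formats and mounts it. Outside the harness, a rough manual equivalent (using udevadm settle in place of the suite's uevent helper; the device path is simply the one from this run, and /mnt stands in for the test's own mount point) would be:

# Sketch of the partition_drive + mkfs sequence, not the script itself.
DISK=/dev/nvme0n1
sgdisk "$DISK" --zap-all               # destroy any existing GPT/MBR structures
sgdisk "$DISK" --new=1:2048:264191     # partition 1: 262144 sectors from sector 2048
partprobe "$DISK"                      # have the kernel re-read the partition table
udevadm settle                         # wait until /dev/nvme0n1p1 shows up
mkfs.ext4 -qF "${DISK}p1"              # same formatting the mkfs() step below performs
mount "${DISK}p1" /mnt                 # the test mounts it under test/setup/nvme_mount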
00:05:28.931 18:21:36 -- setup/common.sh@57 -- # (( part++ )) 00:05:28.931 18:21:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.931 18:21:36 -- setup/common.sh@62 -- # wait 65946 00:05:28.931 18:21:36 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.931 18:21:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:28.931 18:21:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.931 18:21:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:28.931 18:21:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:28.931 18:21:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.931 18:21:36 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.931 18:21:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:28.931 18:21:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:28.931 18:21:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.931 18:21:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.931 18:21:36 -- setup/devices.sh@53 -- # local found=0 00:05:28.931 18:21:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.931 18:21:36 -- setup/devices.sh@56 -- # : 00:05:28.931 18:21:36 -- setup/devices.sh@59 -- # local pci status 00:05:28.931 18:21:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.931 18:21:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:28.931 18:21:36 -- setup/devices.sh@47 -- # setup output config 00:05:28.931 18:21:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.931 18:21:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.931 18:21:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.931 18:21:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:28.931 18:21:36 -- setup/devices.sh@63 -- # found=1 00:05:28.931 18:21:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.931 18:21:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.931 18:21:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.507 18:21:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.507 18:21:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.507 18:21:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.507 18:21:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.507 18:21:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.507 18:21:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:29.507 18:21:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.507 18:21:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.507 18:21:36 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.507 18:21:36 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:29.507 18:21:36 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.507 18:21:36 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.507 18:21:36 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.507 18:21:36 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:29.507 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.507 18:21:36 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.507 18:21:36 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.765 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.765 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.765 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:29.765 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:29.765 18:21:37 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:29.765 18:21:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:29.765 18:21:37 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.765 18:21:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:29.765 18:21:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:29.765 18:21:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.765 18:21:37 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.765 18:21:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:29.765 18:21:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:29.765 18:21:37 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.765 18:21:37 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.765 18:21:37 -- setup/devices.sh@53 -- # local found=0 00:05:29.765 18:21:37 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.765 18:21:37 -- setup/devices.sh@56 -- # : 00:05:29.765 18:21:37 -- setup/devices.sh@59 -- # local pci status 00:05:29.765 18:21:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.765 18:21:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.765 18:21:37 -- setup/devices.sh@47 -- # setup output config 00:05:29.765 18:21:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.765 18:21:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.023 18:21:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.023 18:21:37 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:30.023 18:21:37 -- setup/devices.sh@63 -- # found=1 00:05:30.023 18:21:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.023 18:21:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.023 
18:21:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.280 18:21:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.280 18:21:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.538 18:21:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.538 18:21:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.538 18:21:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.538 18:21:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:30.538 18:21:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.538 18:21:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.538 18:21:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.538 18:21:37 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.538 18:21:37 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:30.538 18:21:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:30.538 18:21:37 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:30.538 18:21:37 -- setup/devices.sh@50 -- # local mount_point= 00:05:30.538 18:21:37 -- setup/devices.sh@51 -- # local test_file= 00:05:30.538 18:21:37 -- setup/devices.sh@53 -- # local found=0 00:05:30.538 18:21:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.538 18:21:37 -- setup/devices.sh@59 -- # local pci status 00:05:30.538 18:21:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.538 18:21:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:30.538 18:21:37 -- setup/devices.sh@47 -- # setup output config 00:05:30.538 18:21:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.538 18:21:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.795 18:21:38 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.795 18:21:38 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:30.795 18:21:38 -- setup/devices.sh@63 -- # found=1 00:05:30.795 18:21:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.795 18:21:38 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.795 18:21:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.053 18:21:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.053 18:21:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.053 18:21:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.053 18:21:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.311 18:21:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.311 18:21:38 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.311 18:21:38 -- setup/devices.sh@68 -- # return 0 00:05:31.311 18:21:38 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:31.311 18:21:38 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.311 18:21:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.311 18:21:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.311 18:21:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.311 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:31.311 00:05:31.311 real 0m4.500s 00:05:31.311 user 0m1.011s 00:05:31.311 sys 0m1.149s 00:05:31.311 18:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.311 18:21:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.311 ************************************ 00:05:31.311 END TEST nvme_mount 00:05:31.311 ************************************ 00:05:31.311 18:21:38 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:31.311 18:21:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.311 18:21:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.311 18:21:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.311 ************************************ 00:05:31.311 START TEST dm_mount 00:05:31.311 ************************************ 00:05:31.311 18:21:38 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:31.311 18:21:38 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:31.311 18:21:38 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:31.311 18:21:38 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:31.311 18:21:38 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:31.311 18:21:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:31.311 18:21:38 -- setup/common.sh@40 -- # local part_no=2 00:05:31.311 18:21:38 -- setup/common.sh@41 -- # local size=1073741824 00:05:31.311 18:21:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:31.311 18:21:38 -- setup/common.sh@44 -- # parts=() 00:05:31.311 18:21:38 -- setup/common.sh@44 -- # local parts 00:05:31.311 18:21:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:31.311 18:21:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.311 18:21:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:31.311 18:21:38 -- setup/common.sh@46 -- # (( part++ )) 00:05:31.311 18:21:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.311 18:21:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:31.311 18:21:38 -- setup/common.sh@46 -- # (( part++ )) 00:05:31.311 18:21:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.311 18:21:38 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:31.311 18:21:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:31.311 18:21:38 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:32.244 Creating new GPT entries in memory. 00:05:32.244 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:32.244 other utilities. 00:05:32.244 18:21:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:32.244 18:21:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.244 18:21:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:32.244 18:21:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.244 18:21:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:33.620 Creating new GPT entries in memory. 00:05:33.620 The operation has completed successfully. 00:05:33.620 18:21:40 -- setup/common.sh@57 -- # (( part++ )) 00:05:33.620 18:21:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.620 18:21:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:33.620 18:21:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.620 18:21:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:34.598 The operation has completed successfully. 00:05:34.598 18:21:41 -- setup/common.sh@57 -- # (( part++ )) 00:05:34.598 18:21:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.598 18:21:41 -- setup/common.sh@62 -- # wait 66406 00:05:34.599 18:21:41 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:34.599 18:21:41 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.599 18:21:41 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:34.599 18:21:41 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:34.599 18:21:41 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:34.599 18:21:41 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:34.599 18:21:41 -- setup/devices.sh@161 -- # break 00:05:34.599 18:21:41 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:34.599 18:21:41 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:34.599 18:21:41 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:34.599 18:21:41 -- setup/devices.sh@166 -- # dm=dm-0 00:05:34.599 18:21:41 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:34.599 18:21:41 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:34.599 18:21:41 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.599 18:21:41 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:34.599 18:21:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.599 18:21:41 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:34.599 18:21:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:34.599 18:21:41 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.599 18:21:41 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:34.599 18:21:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:34.599 18:21:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:34.599 18:21:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.599 18:21:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:34.599 18:21:41 -- setup/devices.sh@53 -- # local found=0 00:05:34.599 18:21:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:34.599 18:21:41 -- setup/devices.sh@56 -- # : 00:05:34.599 18:21:41 -- setup/devices.sh@59 -- # local pci status 00:05:34.599 18:21:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:34.599 18:21:41 -- setup/devices.sh@47 -- # setup output config 00:05:34.599 18:21:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.599 18:21:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.599 18:21:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.599 18:21:41 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.599 18:21:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:34.599 18:21:41 -- setup/devices.sh@63 -- # found=1 00:05:34.599 18:21:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.599 18:21:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.599 18:21:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.857 18:21:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.857 18:21:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.115 18:21:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.115 18:21:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.115 18:21:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.115 18:21:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:35.115 18:21:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.115 18:21:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:35.115 18:21:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.115 18:21:42 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.115 18:21:42 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:35.115 18:21:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:35.115 18:21:42 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:35.115 18:21:42 -- setup/devices.sh@50 -- # local mount_point= 00:05:35.115 18:21:42 -- setup/devices.sh@51 -- # local test_file= 00:05:35.115 18:21:42 -- setup/devices.sh@53 -- # local found=0 00:05:35.115 18:21:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:35.115 18:21:42 -- setup/devices.sh@59 -- # local pci status 00:05:35.115 18:21:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.115 18:21:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:35.115 18:21:42 -- setup/devices.sh@47 -- # setup output config 00:05:35.115 18:21:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.115 18:21:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.373 18:21:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.373 18:21:42 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:35.373 18:21:42 -- setup/devices.sh@63 -- # found=1 00:05:35.373 18:21:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.373 18:21:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.373 18:21:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.631 18:21:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.631 18:21:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.631 18:21:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.631 18:21:42 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.895 18:21:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.895 18:21:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.895 18:21:43 -- setup/devices.sh@68 -- # return 0 00:05:35.895 18:21:43 -- setup/devices.sh@187 -- # cleanup_dm 00:05:35.895 18:21:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.895 18:21:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.895 18:21:43 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:35.895 18:21:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.895 18:21:43 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:35.895 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.895 18:21:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.895 18:21:43 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:35.895 00:05:35.895 real 0m4.520s 00:05:35.895 user 0m0.672s 00:05:35.895 sys 0m0.746s 00:05:35.895 18:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.895 18:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.895 ************************************ 00:05:35.895 END TEST dm_mount 00:05:35.895 ************************************ 00:05:35.895 18:21:43 -- setup/devices.sh@1 -- # cleanup 00:05:35.895 18:21:43 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:35.895 18:21:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.895 18:21:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.895 18:21:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.895 18:21:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.895 18:21:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.151 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.151 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.151 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.151 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.151 18:21:43 -- setup/devices.sh@12 -- # cleanup_dm 00:05:36.151 18:21:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.151 18:21:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:36.151 18:21:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.151 18:21:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:36.151 18:21:43 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.151 18:21:43 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:36.151 ************************************ 00:05:36.151 END TEST devices 00:05:36.151 ************************************ 00:05:36.151 00:05:36.151 real 0m10.547s 00:05:36.151 user 0m2.331s 00:05:36.151 sys 0m2.482s 00:05:36.151 18:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.151 18:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:36.151 00:05:36.151 real 0m21.910s 00:05:36.151 user 0m7.240s 00:05:36.151 sys 0m8.861s 00:05:36.151 18:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.151 18:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:36.151 ************************************ 00:05:36.151 END TEST setup.sh 00:05:36.151 ************************************ 00:05:36.151 18:21:43 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:36.408 Hugepages 00:05:36.408 node hugesize free / total 00:05:36.408 node0 1048576kB 0 / 0 00:05:36.408 node0 2048kB 2048 / 2048 00:05:36.408 00:05:36.408 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.408 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:36.408 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:36.665 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:36.665 18:21:43 -- spdk/autotest.sh@141 -- # uname -s 00:05:36.665 18:21:43 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:36.665 18:21:43 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:36.665 18:21:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.230 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.487 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.487 18:21:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:38.418 18:21:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:38.418 18:21:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:38.418 18:21:45 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:38.418 18:21:45 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:38.418 18:21:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.418 18:21:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.418 18:21:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.418 18:21:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.418 18:21:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.418 18:21:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:38.418 18:21:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:38.418 18:21:45 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.931 Waiting for block devices as requested 00:05:38.931 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.931 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.931 18:21:46 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:38.931 18:21:46 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:38.931 18:21:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:38.931 18:21:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:38.931 18:21:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:38.931 18:21:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:38.931 18:21:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:38.931 18:21:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:38.931 18:21:46 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:38.931 18:21:46 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:39.189 18:21:46 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:39.189 18:21:46 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:39.189 18:21:46 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:39.189 18:21:46 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:39.189 18:21:46 -- common/autotest_common.sh@1542 -- # continue 00:05:39.189 18:21:46 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:39.189 18:21:46 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:39.189 18:21:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:39.189 18:21:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:39.189 18:21:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:39.189 18:21:46 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:39.189 18:21:46 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:39.189 18:21:46 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:39.189 18:21:46 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:39.189 18:21:46 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:39.189 18:21:46 -- common/autotest_common.sh@1542 -- # continue 00:05:39.189 18:21:46 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:39.189 18:21:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:39.189 18:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:39.189 18:21:46 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:39.189 18:21:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.189 18:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:39.189 18:21:46 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.754 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.012 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:40.012 18:21:47 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:40.012 18:21:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.012 18:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.012 18:21:47 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:40.012 18:21:47 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:40.012 18:21:47 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.012 18:21:47 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:40.012 18:21:47 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:40.012 18:21:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:40.012 18:21:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:40.012 18:21:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:40.012 18:21:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.012 18:21:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.012 18:21:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:40.012 18:21:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:40.012 18:21:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:40.012 18:21:47 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:40.012 18:21:47 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:40.012 18:21:47 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:40.012 18:21:47 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.012 18:21:47 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:40.012 18:21:47 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:40.012 18:21:47 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:40.012 18:21:47 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.012 18:21:47 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:40.012 18:21:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:40.012 18:21:47 -- common/autotest_common.sh@1578 -- # return 0 00:05:40.012 18:21:47 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:40.012 18:21:47 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:40.012 18:21:47 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:40.012 18:21:47 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:40.012 18:21:47 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:40.012 18:21:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:40.012 18:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.012 18:21:47 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.012 18:21:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.012 18:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.012 18:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.012 ************************************ 00:05:40.012 START TEST env 00:05:40.012 ************************************ 00:05:40.012 18:21:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.270 * Looking for test storage... 
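Note: the nvme_namespace_revert pass above only acts on controllers that advertise namespace management and still report unallocated capacity; for the QEMU controllers here oacs=0x12a (bit 0x8, Namespace Management, is set) and unvmcap=0, so each iteration just continues. A condensed sketch of that check, using the same nvme-cli parsing as the log above:
    # sketch only - mirrors the grep/cut pipeline in autotest_common.sh
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
    oacs_ns_manage=$((oacs & 0x8))                                   # bit 3 = Namespace Management
    unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
    if [[ $oacs_ns_manage -ne 0 && $unvmcap -eq 0 ]]; then
      echo "namespace management supported, no unallocated capacity - nothing to revert"
    fi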
00:05:40.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:40.270 18:21:47 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:40.270 18:21:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.270 18:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.270 18:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.270 ************************************ 00:05:40.270 START TEST env_memory 00:05:40.270 ************************************ 00:05:40.270 18:21:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:40.270 00:05:40.270 00:05:40.270 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.270 http://cunit.sourceforge.net/ 00:05:40.270 00:05:40.270 00:05:40.270 Suite: memory 00:05:40.270 Test: alloc and free memory map ...[2024-07-14 18:21:47.548610] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:40.270 passed 00:05:40.270 Test: mem map translation ...[2024-07-14 18:21:47.579631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:40.270 [2024-07-14 18:21:47.579674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:40.270 [2024-07-14 18:21:47.579731] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:40.270 [2024-07-14 18:21:47.579741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:40.270 passed 00:05:40.270 Test: mem map registration ...[2024-07-14 18:21:47.643352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:40.270 [2024-07-14 18:21:47.643385] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:40.270 passed 00:05:40.528 Test: mem map adjacent registrations ...passed 00:05:40.528 00:05:40.528 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.528 suites 1 1 n/a 0 0 00:05:40.528 tests 4 4 4 0 0 00:05:40.528 asserts 152 152 152 0 n/a 00:05:40.528 00:05:40.528 Elapsed time = 0.213 seconds 00:05:40.528 00:05:40.528 real 0m0.229s 00:05:40.528 user 0m0.212s 00:05:40.528 sys 0m0.015s 00:05:40.528 18:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.528 18:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.528 ************************************ 00:05:40.528 END TEST env_memory 00:05:40.528 ************************************ 00:05:40.528 18:21:47 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:40.528 18:21:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.528 18:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.528 18:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.528 ************************************ 00:05:40.528 START TEST env_vtophys 00:05:40.528 ************************************ 00:05:40.528 18:21:47 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:40.528 EAL: lib.eal log level changed from notice to debug 00:05:40.528 EAL: Detected lcore 0 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 1 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 2 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 3 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 4 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 5 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 6 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 7 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 8 as core 0 on socket 0 00:05:40.528 EAL: Detected lcore 9 as core 0 on socket 0 00:05:40.528 EAL: Maximum logical cores by configuration: 128 00:05:40.528 EAL: Detected CPU lcores: 10 00:05:40.528 EAL: Detected NUMA nodes: 1 00:05:40.528 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:40.528 EAL: Detected shared linkage of DPDK 00:05:40.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:40.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:40.528 EAL: Registered [vdev] bus. 00:05:40.528 EAL: bus.vdev log level changed from disabled to notice 00:05:40.529 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:40.529 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:40.529 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:40.529 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:40.529 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:40.529 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:40.529 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:40.529 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:40.529 EAL: No shared files mode enabled, IPC will be disabled 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Selected IOVA mode 'PA' 00:05:40.529 EAL: Probing VFIO support... 00:05:40.529 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:40.529 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:40.529 EAL: Ask a virtual area of 0x2e000 bytes 00:05:40.529 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:40.529 EAL: Setting up physically contiguous memory... 
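Note: the vtophys run above ends up in IOVA mode 'PA' because the EAL probe finds neither /sys/module/vfio nor /sys/module/vfio_pci in this VM and the NVMe devices stay bound to uio_pci_generic. The probe is essentially a sysfs existence check; an illustrative shell equivalent (not part of the test itself):
    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
      echo "vfio loaded - EAL may select IOVA mode 'VA'"
    else
      echo "VFIO modules not loaded, skipping VFIO support"   # the situation in the EAL log above
    fi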
00:05:40.529 EAL: Setting maximum number of open files to 524288 00:05:40.529 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:40.529 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:40.529 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.529 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:40.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.529 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.529 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:40.529 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:40.529 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.529 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:40.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.529 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.529 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:40.529 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:40.529 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.529 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:40.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.529 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.529 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:40.529 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:40.529 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.529 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:40.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.529 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.529 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:40.529 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:40.529 EAL: Hugepages will be freed exactly as allocated. 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: TSC frequency is ~2200000 KHz 00:05:40.529 EAL: Main lcore 0 is ready (tid=7efcf48d8a00;cpuset=[0]) 00:05:40.529 EAL: Trying to obtain current memory policy. 00:05:40.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.529 EAL: Restoring previous memory policy: 0 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was expanded by 2MB 00:05:40.529 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:40.529 EAL: Mem event callback 'spdk:(nil)' registered 00:05:40.529 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:40.529 00:05:40.529 00:05:40.529 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.529 http://cunit.sourceforge.net/ 00:05:40.529 00:05:40.529 00:05:40.529 Suite: components_suite 00:05:40.529 Test: vtophys_malloc_test ...passed 00:05:40.529 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
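Note: the 2MB memseg lists reserved above are backed by the hugepage pool that setup.sh status reported earlier (node0 2048kB 2048 / 2048). Those counters are the per-node sysfs values; one way to read them directly, assuming a single NUMA node as detected here:
    hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    printf 'node0 2048kB %s / %s\n' "$(cat $hp/free_hugepages)" "$(cat $hp/nr_hugepages)"
    grep -i hugepages /proc/meminfo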
00:05:40.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.529 EAL: Restoring previous memory policy: 4 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was expanded by 4MB 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was shrunk by 4MB 00:05:40.529 EAL: Trying to obtain current memory policy. 00:05:40.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.529 EAL: Restoring previous memory policy: 4 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.529 EAL: Trying to obtain current memory policy. 00:05:40.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.529 EAL: Restoring previous memory policy: 4 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.529 EAL: Trying to obtain current memory policy. 00:05:40.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.529 EAL: Restoring previous memory policy: 4 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.529 EAL: request: mp_malloc_sync 00:05:40.529 EAL: No shared files mode enabled, IPC is disabled 00:05:40.529 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.788 EAL: Trying to obtain current memory policy. 00:05:40.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.788 EAL: Restoring previous memory policy: 4 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.788 EAL: Trying to obtain current memory policy. 
00:05:40.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.788 EAL: Restoring previous memory policy: 4 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.788 EAL: Trying to obtain current memory policy. 00:05:40.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.788 EAL: Restoring previous memory policy: 4 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.788 EAL: Trying to obtain current memory policy. 00:05:40.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.788 EAL: Restoring previous memory policy: 4 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.788 EAL: request: mp_malloc_sync 00:05:40.788 EAL: No shared files mode enabled, IPC is disabled 00:05:40.788 EAL: Heap on socket 0 was expanded by 258MB 00:05:40.788 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.048 EAL: request: mp_malloc_sync 00:05:41.048 EAL: No shared files mode enabled, IPC is disabled 00:05:41.048 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.048 EAL: Trying to obtain current memory policy. 00:05:41.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.048 EAL: Restoring previous memory policy: 4 00:05:41.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.048 EAL: request: mp_malloc_sync 00:05:41.048 EAL: No shared files mode enabled, IPC is disabled 00:05:41.048 EAL: Heap on socket 0 was expanded by 514MB 00:05:41.306 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.306 EAL: request: mp_malloc_sync 00:05:41.306 EAL: No shared files mode enabled, IPC is disabled 00:05:41.306 EAL: Heap on socket 0 was shrunk by 514MB 00:05:41.306 EAL: Trying to obtain current memory policy. 
00:05:41.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.565 EAL: Restoring previous memory policy: 4 00:05:41.565 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.565 EAL: request: mp_malloc_sync 00:05:41.565 EAL: No shared files mode enabled, IPC is disabled 00:05:41.565 EAL: Heap on socket 0 was expanded by 1026MB 00:05:41.824 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.082 EAL: request: mp_malloc_sync 00:05:42.082 EAL: No shared files mode enabled, IPC is disabled 00:05:42.082 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:42.082 passed 00:05:42.082 00:05:42.082 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.082 suites 1 1 n/a 0 0 00:05:42.082 tests 2 2 2 0 0 00:05:42.082 asserts 5183 5183 5183 0 n/a 00:05:42.082 00:05:42.082 Elapsed time = 1.332 seconds 00:05:42.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.082 EAL: request: mp_malloc_sync 00:05:42.082 EAL: No shared files mode enabled, IPC is disabled 00:05:42.082 EAL: Heap on socket 0 was shrunk by 2MB 00:05:42.082 EAL: No shared files mode enabled, IPC is disabled 00:05:42.082 EAL: No shared files mode enabled, IPC is disabled 00:05:42.082 EAL: No shared files mode enabled, IPC is disabled 00:05:42.082 00:05:42.082 real 0m1.530s 00:05:42.082 user 0m0.846s 00:05:42.082 sys 0m0.552s 00:05:42.082 18:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.082 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.082 ************************************ 00:05:42.082 END TEST env_vtophys 00:05:42.082 ************************************ 00:05:42.082 18:21:49 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:42.082 18:21:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.082 18:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.082 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.082 ************************************ 00:05:42.082 START TEST env_pci 00:05:42.082 ************************************ 00:05:42.082 18:21:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:42.082 00:05:42.082 00:05:42.082 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.082 http://cunit.sourceforge.net/ 00:05:42.082 00:05:42.082 00:05:42.082 Suite: pci 00:05:42.082 Test: pci_hook ...[2024-07-14 18:21:49.374407] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67537 has claimed it 00:05:42.082 passed 00:05:42.082 00:05:42.082 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.082 suites 1 1 n/a 0 0 00:05:42.082 tests 1 1 1 0 0 00:05:42.082 asserts 25 25 25 0 n/a 00:05:42.082 00:05:42.082 Elapsed time = 0.002 seconds 00:05:42.082 EAL: Cannot find device (10000:00:01.0) 00:05:42.082 EAL: Failed to attach device on primary process 00:05:42.082 00:05:42.082 real 0m0.019s 00:05:42.082 user 0m0.008s 00:05:42.082 sys 0m0.010s 00:05:42.082 18:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.082 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.082 ************************************ 00:05:42.082 END TEST env_pci 00:05:42.082 ************************************ 00:05:42.082 18:21:49 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:42.082 18:21:49 -- env/env.sh@15 -- # uname 00:05:42.082 18:21:49 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:42.082 18:21:49 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:42.082 18:21:49 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.082 18:21:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:42.082 18:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.082 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.082 ************************************ 00:05:42.082 START TEST env_dpdk_post_init 00:05:42.082 ************************************ 00:05:42.082 18:21:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.082 EAL: Detected CPU lcores: 10 00:05:42.082 EAL: Detected NUMA nodes: 1 00:05:42.082 EAL: Detected shared linkage of DPDK 00:05:42.082 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.082 EAL: Selected IOVA mode 'PA' 00:05:42.340 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.340 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:42.340 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:42.340 Starting DPDK initialization... 00:05:42.340 Starting SPDK post initialization... 00:05:42.340 SPDK NVMe probe 00:05:42.340 Attaching to 0000:00:06.0 00:05:42.340 Attaching to 0000:00:07.0 00:05:42.340 Attached to 0000:00:06.0 00:05:42.340 Attached to 0000:00:07.0 00:05:42.340 Cleaning up... 00:05:42.340 00:05:42.340 real 0m0.177s 00:05:42.340 user 0m0.042s 00:05:42.340 sys 0m0.035s 00:05:42.340 18:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.340 ************************************ 00:05:42.340 END TEST env_dpdk_post_init 00:05:42.340 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.340 ************************************ 00:05:42.340 18:21:49 -- env/env.sh@26 -- # uname 00:05:42.340 18:21:49 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:42.340 18:21:49 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.340 18:21:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.340 18:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.340 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.340 ************************************ 00:05:42.340 START TEST env_mem_callbacks 00:05:42.340 ************************************ 00:05:42.340 18:21:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.340 EAL: Detected CPU lcores: 10 00:05:42.340 EAL: Detected NUMA nodes: 1 00:05:42.340 EAL: Detected shared linkage of DPDK 00:05:42.340 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.340 EAL: Selected IOVA mode 'PA' 00:05:42.599 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.599 00:05:42.599 00:05:42.599 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.599 http://cunit.sourceforge.net/ 00:05:42.599 00:05:42.599 00:05:42.599 Suite: memory 00:05:42.599 Test: test ... 
00:05:42.599 register 0x200000200000 2097152 00:05:42.599 malloc 3145728 00:05:42.599 register 0x200000400000 4194304 00:05:42.599 buf 0x200000500000 len 3145728 PASSED 00:05:42.599 malloc 64 00:05:42.599 buf 0x2000004fff40 len 64 PASSED 00:05:42.599 malloc 4194304 00:05:42.599 register 0x200000800000 6291456 00:05:42.599 buf 0x200000a00000 len 4194304 PASSED 00:05:42.599 free 0x200000500000 3145728 00:05:42.599 free 0x2000004fff40 64 00:05:42.599 unregister 0x200000400000 4194304 PASSED 00:05:42.599 free 0x200000a00000 4194304 00:05:42.599 unregister 0x200000800000 6291456 PASSED 00:05:42.599 malloc 8388608 00:05:42.599 register 0x200000400000 10485760 00:05:42.599 buf 0x200000600000 len 8388608 PASSED 00:05:42.599 free 0x200000600000 8388608 00:05:42.599 unregister 0x200000400000 10485760 PASSED 00:05:42.599 passed 00:05:42.599 00:05:42.599 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.599 suites 1 1 n/a 0 0 00:05:42.599 tests 1 1 1 0 0 00:05:42.599 asserts 15 15 15 0 n/a 00:05:42.599 00:05:42.599 Elapsed time = 0.009 seconds 00:05:42.599 00:05:42.599 real 0m0.148s 00:05:42.599 user 0m0.017s 00:05:42.599 sys 0m0.028s 00:05:42.599 18:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.599 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.599 ************************************ 00:05:42.599 END TEST env_mem_callbacks 00:05:42.599 ************************************ 00:05:42.599 ************************************ 00:05:42.599 END TEST env 00:05:42.599 ************************************ 00:05:42.599 00:05:42.599 real 0m2.442s 00:05:42.599 user 0m1.224s 00:05:42.599 sys 0m0.862s 00:05:42.599 18:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.599 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.599 18:21:49 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:42.599 18:21:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.599 18:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.599 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.599 ************************************ 00:05:42.599 START TEST rpc 00:05:42.599 ************************************ 00:05:42.599 18:21:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:42.599 * Looking for test storage... 00:05:42.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.599 18:21:49 -- rpc/rpc.sh@65 -- # spdk_pid=67650 00:05:42.599 18:21:49 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.599 18:21:49 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:42.599 18:21:49 -- rpc/rpc.sh@67 -- # waitforlisten 67650 00:05:42.599 18:21:49 -- common/autotest_common.sh@819 -- # '[' -z 67650 ']' 00:05:42.599 18:21:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.599 18:21:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.599 18:21:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
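Note: rpc.sh above uses the standard autotest start-and-wait pattern before issuing any RPCs; a rough sketch of it (killprocess and waitforlisten are helpers from test/common/autotest_common.sh, and waitforlisten prints the "Waiting for process to start up..." message seen below):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $spdk_pid        # polls until /var/tmp/spdk.sock accepts RPCs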
00:05:42.599 18:21:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.599 18:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.857 [2024-07-14 18:21:50.046372] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:42.857 [2024-07-14 18:21:50.046469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67650 ] 00:05:42.857 [2024-07-14 18:21:50.186979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.857 [2024-07-14 18:21:50.276320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.857 [2024-07-14 18:21:50.276484] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:42.857 [2024-07-14 18:21:50.276522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67650' to capture a snapshot of events at runtime. 00:05:42.857 [2024-07-14 18:21:50.276531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67650 for offline analysis/debug. 00:05:42.857 [2024-07-14 18:21:50.276557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.791 18:21:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.791 18:21:51 -- common/autotest_common.sh@852 -- # return 0 00:05:43.791 18:21:51 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.791 18:21:51 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.791 18:21:51 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:43.791 18:21:51 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:43.791 18:21:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.791 18:21:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.791 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 ************************************ 00:05:43.791 START TEST rpc_integrity 00:05:43.791 ************************************ 00:05:43.791 18:21:51 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:43.791 18:21:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.791 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.791 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.791 18:21:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.791 18:21:51 -- rpc/rpc.sh@13 -- # jq length 00:05:43.791 18:21:51 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.791 18:21:51 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.791 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.791 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.791 18:21:51 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:43.791 18:21:51 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.791 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.791 18:21:51 -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.791 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.791 18:21:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.791 { 00:05:43.791 "aliases": [ 00:05:43.791 "e3426a36-b246-417a-badb-af82bb5852e2" 00:05:43.791 ], 00:05:43.791 "assigned_rate_limits": { 00:05:43.791 "r_mbytes_per_sec": 0, 00:05:43.791 "rw_ios_per_sec": 0, 00:05:43.791 "rw_mbytes_per_sec": 0, 00:05:43.791 "w_mbytes_per_sec": 0 00:05:43.791 }, 00:05:43.791 "block_size": 512, 00:05:43.791 "claimed": false, 00:05:43.791 "driver_specific": {}, 00:05:43.791 "memory_domains": [ 00:05:43.791 { 00:05:43.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.791 "dma_device_type": 2 00:05:43.791 } 00:05:43.791 ], 00:05:43.791 "name": "Malloc0", 00:05:43.791 "num_blocks": 16384, 00:05:43.791 "product_name": "Malloc disk", 00:05:43.791 "supported_io_types": { 00:05:43.791 "abort": true, 00:05:43.791 "compare": false, 00:05:43.791 "compare_and_write": false, 00:05:43.791 "flush": true, 00:05:43.791 "nvme_admin": false, 00:05:43.791 "nvme_io": false, 00:05:43.791 "read": true, 00:05:43.791 "reset": true, 00:05:43.791 "unmap": true, 00:05:43.791 "write": true, 00:05:43.791 "write_zeroes": true 00:05:43.791 }, 00:05:43.791 "uuid": "e3426a36-b246-417a-badb-af82bb5852e2", 00:05:43.791 "zoned": false 00:05:43.791 } 00:05:43.791 ]' 00:05:43.791 18:21:51 -- rpc/rpc.sh@17 -- # jq length 00:05:44.049 18:21:51 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.050 18:21:51 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.050 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 [2024-07-14 18:21:51.235567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.050 [2024-07-14 18:21:51.235622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.050 [2024-07-14 18:21:51.235659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17130b0 00:05:44.050 [2024-07-14 18:21:51.235669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.050 [2024-07-14 18:21:51.237446] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.050 [2024-07-14 18:21:51.237518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.050 Passthru0 00:05:44.050 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.050 18:21:51 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.050 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.050 18:21:51 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.050 { 00:05:44.050 "aliases": [ 00:05:44.050 "e3426a36-b246-417a-badb-af82bb5852e2" 00:05:44.050 ], 00:05:44.050 "assigned_rate_limits": { 00:05:44.050 "r_mbytes_per_sec": 0, 00:05:44.050 "rw_ios_per_sec": 0, 00:05:44.050 "rw_mbytes_per_sec": 0, 00:05:44.050 "w_mbytes_per_sec": 0 00:05:44.050 }, 00:05:44.050 "block_size": 512, 00:05:44.050 "claim_type": "exclusive_write", 00:05:44.050 "claimed": true, 00:05:44.050 "driver_specific": {}, 00:05:44.050 "memory_domains": [ 00:05:44.050 { 00:05:44.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.050 "dma_device_type": 2 00:05:44.050 } 00:05:44.050 ], 00:05:44.050 "name": "Malloc0", 00:05:44.050 "num_blocks": 16384, 
00:05:44.050 "product_name": "Malloc disk", 00:05:44.050 "supported_io_types": { 00:05:44.050 "abort": true, 00:05:44.050 "compare": false, 00:05:44.050 "compare_and_write": false, 00:05:44.050 "flush": true, 00:05:44.050 "nvme_admin": false, 00:05:44.050 "nvme_io": false, 00:05:44.050 "read": true, 00:05:44.050 "reset": true, 00:05:44.050 "unmap": true, 00:05:44.050 "write": true, 00:05:44.050 "write_zeroes": true 00:05:44.050 }, 00:05:44.050 "uuid": "e3426a36-b246-417a-badb-af82bb5852e2", 00:05:44.050 "zoned": false 00:05:44.050 }, 00:05:44.050 { 00:05:44.050 "aliases": [ 00:05:44.050 "fd8e22ba-f529-5b11-885e-96ff9b723ebe" 00:05:44.050 ], 00:05:44.050 "assigned_rate_limits": { 00:05:44.050 "r_mbytes_per_sec": 0, 00:05:44.050 "rw_ios_per_sec": 0, 00:05:44.050 "rw_mbytes_per_sec": 0, 00:05:44.050 "w_mbytes_per_sec": 0 00:05:44.050 }, 00:05:44.050 "block_size": 512, 00:05:44.050 "claimed": false, 00:05:44.050 "driver_specific": { 00:05:44.050 "passthru": { 00:05:44.050 "base_bdev_name": "Malloc0", 00:05:44.050 "name": "Passthru0" 00:05:44.050 } 00:05:44.050 }, 00:05:44.050 "memory_domains": [ 00:05:44.050 { 00:05:44.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.050 "dma_device_type": 2 00:05:44.050 } 00:05:44.050 ], 00:05:44.050 "name": "Passthru0", 00:05:44.050 "num_blocks": 16384, 00:05:44.050 "product_name": "passthru", 00:05:44.050 "supported_io_types": { 00:05:44.050 "abort": true, 00:05:44.050 "compare": false, 00:05:44.050 "compare_and_write": false, 00:05:44.050 "flush": true, 00:05:44.050 "nvme_admin": false, 00:05:44.050 "nvme_io": false, 00:05:44.050 "read": true, 00:05:44.050 "reset": true, 00:05:44.050 "unmap": true, 00:05:44.050 "write": true, 00:05:44.050 "write_zeroes": true 00:05:44.050 }, 00:05:44.050 "uuid": "fd8e22ba-f529-5b11-885e-96ff9b723ebe", 00:05:44.050 "zoned": false 00:05:44.050 } 00:05:44.050 ]' 00:05:44.050 18:21:51 -- rpc/rpc.sh@21 -- # jq length 00:05:44.050 18:21:51 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.050 18:21:51 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.050 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.050 18:21:51 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.050 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.050 18:21:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.050 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.050 18:21:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.050 18:21:51 -- rpc/rpc.sh@26 -- # jq length 00:05:44.050 ************************************ 00:05:44.050 END TEST rpc_integrity 00:05:44.050 ************************************ 00:05:44.050 18:21:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.050 00:05:44.050 real 0m0.339s 00:05:44.050 user 0m0.223s 00:05:44.050 sys 0m0.034s 00:05:44.050 18:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 18:21:51 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:44.050 18:21:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.050 
18:21:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.050 ************************************ 00:05:44.050 START TEST rpc_plugins 00:05:44.050 ************************************ 00:05:44.050 18:21:51 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:44.050 18:21:51 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:44.050 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.050 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.309 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.309 18:21:51 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:44.309 18:21:51 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:44.309 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.309 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.309 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.309 18:21:51 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:44.309 { 00:05:44.309 "aliases": [ 00:05:44.309 "d04179b6-969f-40a1-93f3-2379ec9988d8" 00:05:44.309 ], 00:05:44.309 "assigned_rate_limits": { 00:05:44.309 "r_mbytes_per_sec": 0, 00:05:44.309 "rw_ios_per_sec": 0, 00:05:44.309 "rw_mbytes_per_sec": 0, 00:05:44.309 "w_mbytes_per_sec": 0 00:05:44.309 }, 00:05:44.309 "block_size": 4096, 00:05:44.309 "claimed": false, 00:05:44.310 "driver_specific": {}, 00:05:44.310 "memory_domains": [ 00:05:44.310 { 00:05:44.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.310 "dma_device_type": 2 00:05:44.310 } 00:05:44.310 ], 00:05:44.310 "name": "Malloc1", 00:05:44.310 "num_blocks": 256, 00:05:44.310 "product_name": "Malloc disk", 00:05:44.310 "supported_io_types": { 00:05:44.310 "abort": true, 00:05:44.310 "compare": false, 00:05:44.310 "compare_and_write": false, 00:05:44.310 "flush": true, 00:05:44.310 "nvme_admin": false, 00:05:44.310 "nvme_io": false, 00:05:44.310 "read": true, 00:05:44.310 "reset": true, 00:05:44.310 "unmap": true, 00:05:44.310 "write": true, 00:05:44.310 "write_zeroes": true 00:05:44.310 }, 00:05:44.310 "uuid": "d04179b6-969f-40a1-93f3-2379ec9988d8", 00:05:44.310 "zoned": false 00:05:44.310 } 00:05:44.310 ]' 00:05:44.310 18:21:51 -- rpc/rpc.sh@32 -- # jq length 00:05:44.310 18:21:51 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:44.310 18:21:51 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:44.310 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.310 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.310 18:21:51 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:44.310 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.310 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.310 18:21:51 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:44.310 18:21:51 -- rpc/rpc.sh@36 -- # jq length 00:05:44.310 ************************************ 00:05:44.310 END TEST rpc_plugins 00:05:44.310 ************************************ 00:05:44.310 18:21:51 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:44.310 00:05:44.310 real 0m0.172s 00:05:44.310 user 0m0.114s 00:05:44.310 sys 0m0.019s 00:05:44.310 18:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.310 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 18:21:51 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
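Note: the rpc_integrity test above (and rpc_daemon_integrity below) exercise the same create/inspect/delete cycle over the Unix-socket RPC server; a condensed sketch, where rpc_cmd is the autotest wrapper that effectively invokes scripts/rpc.py against the spdk_tgt started earlier:
    malloc=$(rpc_cmd bdev_malloc_create 8 512)    # 8 MiB malloc bdev, 512-byte blocks -> 16384 blocks, name "Malloc0"
    rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0
    rpc_cmd bdev_get_bdevs | jq length            # expect 2: the malloc bdev plus its passthru
    rpc_cmd bdev_passthru_delete Passthru0
    rpc_cmd bdev_malloc_delete "$malloc"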
00:05:44.310 18:21:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.310 18:21:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.310 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 ************************************ 00:05:44.310 START TEST rpc_trace_cmd_test 00:05:44.310 ************************************ 00:05:44.310 18:21:51 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:44.310 18:21:51 -- rpc/rpc.sh@40 -- # local info 00:05:44.310 18:21:51 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:44.310 18:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.310 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 18:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.310 18:21:51 -- rpc/rpc.sh@42 -- # info='{ 00:05:44.310 "bdev": { 00:05:44.310 "mask": "0x8", 00:05:44.310 "tpoint_mask": "0xffffffffffffffff" 00:05:44.310 }, 00:05:44.310 "bdev_nvme": { 00:05:44.310 "mask": "0x4000", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "blobfs": { 00:05:44.310 "mask": "0x80", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "dsa": { 00:05:44.310 "mask": "0x200", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "ftl": { 00:05:44.310 "mask": "0x40", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "iaa": { 00:05:44.310 "mask": "0x1000", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "iscsi_conn": { 00:05:44.310 "mask": "0x2", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "nvme_pcie": { 00:05:44.310 "mask": "0x800", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "nvme_tcp": { 00:05:44.310 "mask": "0x2000", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "nvmf_rdma": { 00:05:44.310 "mask": "0x10", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "nvmf_tcp": { 00:05:44.310 "mask": "0x20", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "scsi": { 00:05:44.310 "mask": "0x4", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "thread": { 00:05:44.310 "mask": "0x400", 00:05:44.310 "tpoint_mask": "0x0" 00:05:44.310 }, 00:05:44.310 "tpoint_group_mask": "0x8", 00:05:44.310 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67650" 00:05:44.310 }' 00:05:44.310 18:21:51 -- rpc/rpc.sh@43 -- # jq length 00:05:44.568 18:21:51 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:44.568 18:21:51 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:44.568 18:21:51 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:44.568 18:21:51 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:44.568 18:21:51 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:44.568 18:21:51 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:44.568 18:21:51 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:44.568 18:21:51 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:44.826 ************************************ 00:05:44.826 END TEST rpc_trace_cmd_test 00:05:44.826 ************************************ 00:05:44.826 18:21:51 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:44.826 00:05:44.826 real 0m0.301s 00:05:44.826 user 0m0.257s 00:05:44.826 sys 0m0.032s 00:05:44.826 18:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.826 18:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.826 18:21:52 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:44.826 18:21:52 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:44.826 18:21:52 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:44.826 18:21:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.826 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.826 ************************************ 00:05:44.826 START TEST go_rpc 00:05:44.826 ************************************ 00:05:44.826 18:21:52 -- common/autotest_common.sh@1104 -- # go_rpc 00:05:44.826 18:21:52 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:44.826 18:21:52 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:44.826 18:21:52 -- rpc/rpc.sh@52 -- # jq length 00:05:44.826 18:21:52 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:44.826 18:21:52 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.826 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.826 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.826 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.826 18:21:52 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:44.826 18:21:52 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:44.826 18:21:52 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["c2a59bda-398c-48d5-bb36-f935b1e4aa1c"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"c2a59bda-398c-48d5-bb36-f935b1e4aa1c","zoned":false}]' 00:05:44.826 18:21:52 -- rpc/rpc.sh@57 -- # jq length 00:05:44.826 18:21:52 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:44.826 18:21:52 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:44.826 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.826 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.826 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.826 18:21:52 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:44.826 18:21:52 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:44.826 18:21:52 -- rpc/rpc.sh@61 -- # jq length 00:05:45.084 ************************************ 00:05:45.084 END TEST go_rpc 00:05:45.084 ************************************ 00:05:45.084 18:21:52 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:45.084 00:05:45.084 real 0m0.233s 00:05:45.084 user 0m0.158s 00:05:45.084 sys 0m0.041s 00:05:45.084 18:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.084 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.084 18:21:52 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.084 18:21:52 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.084 18:21:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.084 18:21:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.084 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.084 ************************************ 00:05:45.084 START TEST rpc_daemon_integrity 00:05:45.084 ************************************ 00:05:45.084 18:21:52 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:45.084 18:21:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.084 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.084 18:21:52 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.084 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.085 18:21:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.085 18:21:52 -- rpc/rpc.sh@13 -- # jq length 00:05:45.085 18:21:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.085 18:21:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.085 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.085 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.085 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.085 18:21:52 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:45.085 18:21:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.085 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.085 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.085 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.085 18:21:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.085 { 00:05:45.085 "aliases": [ 00:05:45.085 "78b67b95-ec17-4085-9ab2-26604ff49643" 00:05:45.085 ], 00:05:45.085 "assigned_rate_limits": { 00:05:45.085 "r_mbytes_per_sec": 0, 00:05:45.085 "rw_ios_per_sec": 0, 00:05:45.085 "rw_mbytes_per_sec": 0, 00:05:45.085 "w_mbytes_per_sec": 0 00:05:45.085 }, 00:05:45.085 "block_size": 512, 00:05:45.085 "claimed": false, 00:05:45.085 "driver_specific": {}, 00:05:45.085 "memory_domains": [ 00:05:45.085 { 00:05:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.085 "dma_device_type": 2 00:05:45.085 } 00:05:45.085 ], 00:05:45.085 "name": "Malloc3", 00:05:45.085 "num_blocks": 16384, 00:05:45.085 "product_name": "Malloc disk", 00:05:45.085 "supported_io_types": { 00:05:45.085 "abort": true, 00:05:45.085 "compare": false, 00:05:45.085 "compare_and_write": false, 00:05:45.085 "flush": true, 00:05:45.085 "nvme_admin": false, 00:05:45.085 "nvme_io": false, 00:05:45.085 "read": true, 00:05:45.085 "reset": true, 00:05:45.085 "unmap": true, 00:05:45.085 "write": true, 00:05:45.085 "write_zeroes": true 00:05:45.085 }, 00:05:45.085 "uuid": "78b67b95-ec17-4085-9ab2-26604ff49643", 00:05:45.085 "zoned": false 00:05:45.085 } 00:05:45.085 ]' 00:05:45.085 18:21:52 -- rpc/rpc.sh@17 -- # jq length 00:05:45.085 18:21:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.085 18:21:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:45.085 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.085 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.085 [2024-07-14 18:21:52.490185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:45.085 [2024-07-14 18:21:52.490250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.085 [2024-07-14 18:21:52.490269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18b2b50 00:05:45.085 [2024-07-14 18:21:52.490278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.085 [2024-07-14 18:21:52.492014] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.085 [2024-07-14 18:21:52.492062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.085 Passthru0 00:05:45.085 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.085 18:21:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.085 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.085 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 
18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.343 18:21:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.343 { 00:05:45.343 "aliases": [ 00:05:45.343 "78b67b95-ec17-4085-9ab2-26604ff49643" 00:05:45.343 ], 00:05:45.343 "assigned_rate_limits": { 00:05:45.343 "r_mbytes_per_sec": 0, 00:05:45.343 "rw_ios_per_sec": 0, 00:05:45.343 "rw_mbytes_per_sec": 0, 00:05:45.343 "w_mbytes_per_sec": 0 00:05:45.343 }, 00:05:45.343 "block_size": 512, 00:05:45.343 "claim_type": "exclusive_write", 00:05:45.343 "claimed": true, 00:05:45.343 "driver_specific": {}, 00:05:45.343 "memory_domains": [ 00:05:45.343 { 00:05:45.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.343 "dma_device_type": 2 00:05:45.343 } 00:05:45.343 ], 00:05:45.343 "name": "Malloc3", 00:05:45.343 "num_blocks": 16384, 00:05:45.343 "product_name": "Malloc disk", 00:05:45.343 "supported_io_types": { 00:05:45.343 "abort": true, 00:05:45.343 "compare": false, 00:05:45.343 "compare_and_write": false, 00:05:45.343 "flush": true, 00:05:45.343 "nvme_admin": false, 00:05:45.343 "nvme_io": false, 00:05:45.343 "read": true, 00:05:45.343 "reset": true, 00:05:45.343 "unmap": true, 00:05:45.343 "write": true, 00:05:45.343 "write_zeroes": true 00:05:45.343 }, 00:05:45.343 "uuid": "78b67b95-ec17-4085-9ab2-26604ff49643", 00:05:45.343 "zoned": false 00:05:45.343 }, 00:05:45.343 { 00:05:45.343 "aliases": [ 00:05:45.343 "ef7ac0a6-95eb-5aea-a21c-59bdca8033c4" 00:05:45.343 ], 00:05:45.343 "assigned_rate_limits": { 00:05:45.343 "r_mbytes_per_sec": 0, 00:05:45.343 "rw_ios_per_sec": 0, 00:05:45.343 "rw_mbytes_per_sec": 0, 00:05:45.343 "w_mbytes_per_sec": 0 00:05:45.343 }, 00:05:45.343 "block_size": 512, 00:05:45.343 "claimed": false, 00:05:45.343 "driver_specific": { 00:05:45.343 "passthru": { 00:05:45.343 "base_bdev_name": "Malloc3", 00:05:45.343 "name": "Passthru0" 00:05:45.343 } 00:05:45.343 }, 00:05:45.343 "memory_domains": [ 00:05:45.343 { 00:05:45.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.343 "dma_device_type": 2 00:05:45.343 } 00:05:45.343 ], 00:05:45.343 "name": "Passthru0", 00:05:45.343 "num_blocks": 16384, 00:05:45.343 "product_name": "passthru", 00:05:45.343 "supported_io_types": { 00:05:45.343 "abort": true, 00:05:45.343 "compare": false, 00:05:45.343 "compare_and_write": false, 00:05:45.343 "flush": true, 00:05:45.343 "nvme_admin": false, 00:05:45.343 "nvme_io": false, 00:05:45.343 "read": true, 00:05:45.343 "reset": true, 00:05:45.343 "unmap": true, 00:05:45.343 "write": true, 00:05:45.343 "write_zeroes": true 00:05:45.343 }, 00:05:45.343 "uuid": "ef7ac0a6-95eb-5aea-a21c-59bdca8033c4", 00:05:45.343 "zoned": false 00:05:45.343 } 00:05:45.343 ]' 00:05:45.343 18:21:52 -- rpc/rpc.sh@21 -- # jq length 00:05:45.343 18:21:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.343 18:21:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.343 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.343 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.343 18:21:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:45.343 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.343 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.343 18:21:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.343 18:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.343 18:21:52 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.343 18:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.343 18:21:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.343 18:21:52 -- rpc/rpc.sh@26 -- # jq length 00:05:45.343 ************************************ 00:05:45.343 END TEST rpc_daemon_integrity 00:05:45.343 ************************************ 00:05:45.343 18:21:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.343 00:05:45.343 real 0m0.329s 00:05:45.343 user 0m0.217s 00:05:45.343 sys 0m0.045s 00:05:45.343 18:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.343 18:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 18:21:52 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.343 18:21:52 -- rpc/rpc.sh@84 -- # killprocess 67650 00:05:45.343 18:21:52 -- common/autotest_common.sh@926 -- # '[' -z 67650 ']' 00:05:45.343 18:21:52 -- common/autotest_common.sh@930 -- # kill -0 67650 00:05:45.343 18:21:52 -- common/autotest_common.sh@931 -- # uname 00:05:45.343 18:21:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.343 18:21:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67650 00:05:45.343 killing process with pid 67650 00:05:45.343 18:21:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.343 18:21:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.343 18:21:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67650' 00:05:45.343 18:21:52 -- common/autotest_common.sh@945 -- # kill 67650 00:05:45.343 18:21:52 -- common/autotest_common.sh@950 -- # wait 67650 00:05:45.911 00:05:45.911 real 0m3.222s 00:05:45.911 user 0m4.292s 00:05:45.911 sys 0m0.784s 00:05:45.911 18:21:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.911 ************************************ 00:05:45.911 END TEST rpc 00:05:45.911 ************************************ 00:05:45.911 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.911 18:21:53 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.911 18:21:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.911 18:21:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.911 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.911 ************************************ 00:05:45.911 START TEST rpc_client 00:05:45.911 ************************************ 00:05:45.911 18:21:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.911 * Looking for test storage... 
00:05:45.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.911 18:21:53 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.911 OK 00:05:45.911 18:21:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.911 00:05:45.911 real 0m0.095s 00:05:45.911 user 0m0.047s 00:05:45.911 sys 0m0.054s 00:05:45.911 ************************************ 00:05:45.911 END TEST rpc_client 00:05:45.911 ************************************ 00:05:45.911 18:21:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.911 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.911 18:21:53 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.911 18:21:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.911 18:21:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.911 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.911 ************************************ 00:05:45.911 START TEST json_config 00:05:45.911 ************************************ 00:05:45.911 18:21:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.171 18:21:53 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.171 18:21:53 -- nvmf/common.sh@7 -- # uname -s 00:05:46.171 18:21:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.171 18:21:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.171 18:21:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.171 18:21:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.171 18:21:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.171 18:21:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.171 18:21:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.171 18:21:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.171 18:21:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.171 18:21:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.171 18:21:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:05:46.171 18:21:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:05:46.171 18:21:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.171 18:21:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.171 18:21:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.171 18:21:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.171 18:21:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.171 18:21:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.171 18:21:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.171 18:21:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.171 18:21:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.172 18:21:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.172 18:21:53 -- paths/export.sh@5 -- # export PATH 00:05:46.172 18:21:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.172 18:21:53 -- nvmf/common.sh@46 -- # : 0 00:05:46.172 18:21:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:46.172 18:21:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:46.172 18:21:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:46.172 18:21:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.172 18:21:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.172 18:21:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:46.172 18:21:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:46.172 18:21:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:46.172 18:21:53 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.172 18:21:53 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.172 18:21:53 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:46.172 18:21:53 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.172 18:21:53 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:46.172 18:21:53 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.172 18:21:53 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:46.172 18:21:53 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:46.172 18:21:53 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:46.172 18:21:53 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:46.172 18:21:53 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.172 18:21:53 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:46.172 INFO: JSON configuration test init 00:05:46.172 18:21:53 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:46.172 18:21:53 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:46.172 18:21:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.172 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.172 18:21:53 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:46.172 18:21:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.172 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.172 Waiting for target to run... 00:05:46.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.172 18:21:53 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.172 18:21:53 -- json_config/json_config.sh@98 -- # local app=target 00:05:46.172 18:21:53 -- json_config/json_config.sh@99 -- # shift 00:05:46.172 18:21:53 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:46.172 18:21:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.172 18:21:53 -- json_config/json_config.sh@111 -- # app_pid[$app]=67951 00:05:46.172 18:21:53 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:46.172 18:21:53 -- json_config/json_config.sh@114 -- # waitforlisten 67951 /var/tmp/spdk_tgt.sock 00:05:46.172 18:21:53 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.172 18:21:53 -- common/autotest_common.sh@819 -- # '[' -z 67951 ']' 00:05:46.172 18:21:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.172 18:21:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.172 18:21:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.172 18:21:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.172 18:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.172 [2024-07-14 18:21:53.466140] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:46.172 [2024-07-14 18:21:53.466221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67951 ] 00:05:46.739 [2024-07-14 18:21:53.877870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.739 [2024-07-14 18:21:53.936675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.739 [2024-07-14 18:21:53.936875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.303 00:05:47.303 18:21:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.303 18:21:54 -- common/autotest_common.sh@852 -- # return 0 00:05:47.303 18:21:54 -- json_config/json_config.sh@115 -- # echo '' 00:05:47.303 18:21:54 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:47.303 18:21:54 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:47.303 18:21:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.303 18:21:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.303 18:21:54 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:47.303 18:21:54 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:47.303 18:21:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.303 18:21:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.303 18:21:54 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.303 18:21:54 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:47.303 18:21:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.561 18:21:54 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:47.561 18:21:54 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:47.561 18:21:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.561 18:21:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.561 18:21:54 -- json_config/json_config.sh@48 -- # local ret=0 00:05:47.561 18:21:54 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:47.561 18:21:54 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:47.561 18:21:54 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:47.561 18:21:54 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:47.561 18:21:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.128 18:21:55 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:48.128 18:21:55 -- json_config/json_config.sh@51 -- # local get_types 00:05:48.128 18:21:55 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:48.128 18:21:55 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:48.128 18:21:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:48.128 18:21:55 -- common/autotest_common.sh@10 -- # set +x 00:05:48.128 18:21:55 -- json_config/json_config.sh@58 -- # return 0 00:05:48.128 18:21:55 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:48.128 18:21:55 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
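For reference, the tgt_rpc wrapper traced above is just scripts/rpc.py pointed at the target's UNIX socket, so the notification-type check can be reproduced by hand. A minimal sketch, using the same socket and jq filter as this run (the RPC shell variable is only shorthand here):

  # assumes spdk_tgt is already listening on /var/tmp/spdk_tgt.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
  # this build reports two types: bdev_register and bdev_unregister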
00:05:48.128 18:21:55 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:48.128 18:21:55 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:48.128 18:21:55 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:48.128 18:21:55 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:48.128 18:21:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:48.128 18:21:55 -- common/autotest_common.sh@10 -- # set +x 00:05:48.128 18:21:55 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.128 18:21:55 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:48.128 18:21:55 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:48.128 18:21:55 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.128 18:21:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.387 MallocForNvmf0 00:05:48.387 18:21:55 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.387 18:21:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.646 MallocForNvmf1 00:05:48.646 18:21:55 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.646 18:21:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.646 [2024-07-14 18:21:56.054517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.904 18:21:56 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.904 18:21:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.904 18:21:56 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.904 18:21:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.162 18:21:56 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.162 18:21:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.728 18:21:56 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.728 18:21:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.728 [2024-07-14 18:21:57.063106] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.728 18:21:57 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:49.728 18:21:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.728 18:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 18:21:57 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:49.728 18:21:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.728 18:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:49.985 18:21:57 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:49.985 18:21:57 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.985 18:21:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.243 MallocBdevForConfigChangeCheck 00:05:50.243 18:21:57 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:50.243 18:21:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:50.243 18:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 18:21:57 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:50.243 18:21:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.502 INFO: shutting down applications... 00:05:50.503 18:21:57 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:50.503 18:21:57 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:50.503 18:21:57 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:50.503 18:21:57 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:50.503 18:21:57 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.761 Calling clear_iscsi_subsystem 00:05:50.761 Calling clear_nvmf_subsystem 00:05:50.761 Calling clear_nbd_subsystem 00:05:50.761 Calling clear_ublk_subsystem 00:05:50.761 Calling clear_vhost_blk_subsystem 00:05:50.761 Calling clear_vhost_scsi_subsystem 00:05:50.761 Calling clear_scheduler_subsystem 00:05:50.761 Calling clear_bdev_subsystem 00:05:50.761 Calling clear_accel_subsystem 00:05:50.761 Calling clear_vmd_subsystem 00:05:50.761 Calling clear_sock_subsystem 00:05:50.761 Calling clear_iobuf_subsystem 00:05:51.020 18:21:58 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:51.020 18:21:58 -- json_config/json_config.sh@396 -- # count=100 00:05:51.020 18:21:58 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:51.020 18:21:58 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:51.020 18:21:58 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.020 18:21:58 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:51.279 18:21:58 -- json_config/json_config.sh@398 -- # break 00:05:51.279 18:21:58 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:51.279 18:21:58 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:51.279 18:21:58 -- json_config/json_config.sh@120 -- # local app=target 00:05:51.279 18:21:58 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:51.279 18:21:58 -- json_config/json_config.sh@124 -- # [[ -n 67951 ]] 00:05:51.279 18:21:58 -- json_config/json_config.sh@127 -- # kill -SIGINT 67951 00:05:51.279 18:21:58 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:05:51.279 18:21:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:51.279 18:21:58 -- json_config/json_config.sh@130 -- # kill -0 67951 00:05:51.279 18:21:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:51.845 18:21:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:51.845 18:21:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:51.845 18:21:59 -- json_config/json_config.sh@130 -- # kill -0 67951 00:05:51.845 18:21:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:51.845 18:21:59 -- json_config/json_config.sh@132 -- # break 00:05:51.845 18:21:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:51.845 SPDK target shutdown done 00:05:51.845 18:21:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:51.845 INFO: relaunching applications... 00:05:51.845 18:21:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:51.845 18:21:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.845 18:21:59 -- json_config/json_config.sh@98 -- # local app=target 00:05:51.845 18:21:59 -- json_config/json_config.sh@99 -- # shift 00:05:51.845 18:21:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:51.845 18:21:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:51.845 18:21:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:51.845 18:21:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:51.845 18:21:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:51.845 18:21:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=68220 00:05:51.845 18:21:59 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.845 Waiting for target to run... 00:05:51.845 18:21:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:51.846 18:21:59 -- json_config/json_config.sh@114 -- # waitforlisten 68220 /var/tmp/spdk_tgt.sock 00:05:51.846 18:21:59 -- common/autotest_common.sh@819 -- # '[' -z 68220 ']' 00:05:51.846 18:21:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.846 18:21:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.846 18:21:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.846 18:21:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.846 18:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:51.846 [2024-07-14 18:21:59.180163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:51.846 [2024-07-14 18:21:59.180334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68220 ] 00:05:52.412 [2024-07-14 18:21:59.617561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.412 [2024-07-14 18:21:59.682996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.412 [2024-07-14 18:21:59.683176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.670 [2024-07-14 18:21:59.985334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.670 [2024-07-14 18:22:00.017411] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.928 18:22:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.928 00:05:52.928 18:22:00 -- common/autotest_common.sh@852 -- # return 0 00:05:52.928 18:22:00 -- json_config/json_config.sh@115 -- # echo '' 00:05:52.928 INFO: Checking if target configuration is the same... 00:05:52.928 18:22:00 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:52.928 18:22:00 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:52.928 18:22:00 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.928 18:22:00 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:52.928 18:22:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.928 + '[' 2 -ne 2 ']' 00:05:52.928 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:52.928 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:52.928 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:52.928 +++ basename /dev/fd/62 00:05:52.928 ++ mktemp /tmp/62.XXX 00:05:52.928 + tmp_file_1=/tmp/62.c5Z 00:05:52.928 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.928 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.928 + tmp_file_2=/tmp/spdk_tgt_config.json.IqL 00:05:52.928 + ret=0 00:05:52.928 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.186 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.444 + diff -u /tmp/62.c5Z /tmp/spdk_tgt_config.json.IqL 00:05:53.444 INFO: JSON config files are the same 00:05:53.444 + echo 'INFO: JSON config files are the same' 00:05:53.444 + rm /tmp/62.c5Z /tmp/spdk_tgt_config.json.IqL 00:05:53.444 + exit 0 00:05:53.444 18:22:00 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:53.444 INFO: changing configuration and checking if this can be detected... 00:05:53.444 18:22:00 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
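The "JSON config files are the same" verdict above comes from json_diff.sh, which sorts two config dumps with config_filter.py and diffs them. A rough standalone sketch of that step (the $SPDK shorthand and the /tmp file names are illustrative; the commands match the ones traced in this run):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  $SPDK/test/json_config/config_filter.py -method sort \
      < $SPDK/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json   # empty diff => same configuration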
00:05:53.444 18:22:00 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.444 18:22:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.702 18:22:00 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.702 18:22:00 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:53.702 18:22:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.702 + '[' 2 -ne 2 ']' 00:05:53.702 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:53.702 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:53.702 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:53.702 +++ basename /dev/fd/62 00:05:53.702 ++ mktemp /tmp/62.XXX 00:05:53.702 + tmp_file_1=/tmp/62.izZ 00:05:53.702 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.702 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.702 + tmp_file_2=/tmp/spdk_tgt_config.json.RG3 00:05:53.702 + ret=0 00:05:53.702 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.961 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.221 + diff -u /tmp/62.izZ /tmp/spdk_tgt_config.json.RG3 00:05:54.221 + ret=1 00:05:54.221 + echo '=== Start of file: /tmp/62.izZ ===' 00:05:54.221 + cat /tmp/62.izZ 00:05:54.221 + echo '=== End of file: /tmp/62.izZ ===' 00:05:54.221 + echo '' 00:05:54.221 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RG3 ===' 00:05:54.221 + cat /tmp/spdk_tgt_config.json.RG3 00:05:54.221 + echo '=== End of file: /tmp/spdk_tgt_config.json.RG3 ===' 00:05:54.221 + echo '' 00:05:54.221 + rm /tmp/62.izZ /tmp/spdk_tgt_config.json.RG3 00:05:54.221 + exit 1 00:05:54.221 INFO: configuration change detected. 00:05:54.221 18:22:01 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
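Change detection is the same comparison run after deliberately removing the MallocBdevForConfigChangeCheck marker bdev, at which point the diff is expected to be non-empty. Continuing the sketch above (same illustrative names):

  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  if ! diff -u /tmp/file_sorted.json /tmp/live_sorted.json > /dev/null; then
      echo 'INFO: configuration change detected.'
  fi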
00:05:54.221 18:22:01 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:54.221 18:22:01 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:54.221 18:22:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:54.221 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.221 18:22:01 -- json_config/json_config.sh@360 -- # local ret=0 00:05:54.221 18:22:01 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:54.221 18:22:01 -- json_config/json_config.sh@370 -- # [[ -n 68220 ]] 00:05:54.221 18:22:01 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:54.221 18:22:01 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:54.221 18:22:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:54.221 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.221 18:22:01 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:54.221 18:22:01 -- json_config/json_config.sh@246 -- # uname -s 00:05:54.221 18:22:01 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:54.221 18:22:01 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:54.221 18:22:01 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:54.221 18:22:01 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:54.221 18:22:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:54.221 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.221 18:22:01 -- json_config/json_config.sh@376 -- # killprocess 68220 00:05:54.221 18:22:01 -- common/autotest_common.sh@926 -- # '[' -z 68220 ']' 00:05:54.221 18:22:01 -- common/autotest_common.sh@930 -- # kill -0 68220 00:05:54.221 18:22:01 -- common/autotest_common.sh@931 -- # uname 00:05:54.221 18:22:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.221 18:22:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68220 00:05:54.221 18:22:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.221 killing process with pid 68220 00:05:54.221 18:22:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.221 18:22:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68220' 00:05:54.221 18:22:01 -- common/autotest_common.sh@945 -- # kill 68220 00:05:54.221 18:22:01 -- common/autotest_common.sh@950 -- # wait 68220 00:05:54.479 18:22:01 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.479 18:22:01 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:54.479 18:22:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:54.479 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.479 18:22:01 -- json_config/json_config.sh@381 -- # return 0 00:05:54.479 INFO: Success 00:05:54.479 18:22:01 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:54.479 ************************************ 00:05:54.479 END TEST json_config 00:05:54.479 ************************************ 00:05:54.479 00:05:54.479 real 0m8.462s 00:05:54.479 user 0m12.143s 00:05:54.479 sys 0m1.921s 00:05:54.479 18:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.479 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.479 18:22:01 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:54.479 
18:22:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.479 18:22:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.479 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.479 ************************************ 00:05:54.479 START TEST json_config_extra_key 00:05:54.479 ************************************ 00:05:54.479 18:22:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:54.479 18:22:01 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:54.479 18:22:01 -- nvmf/common.sh@7 -- # uname -s 00:05:54.479 18:22:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.479 18:22:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.479 18:22:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.479 18:22:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.479 18:22:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.479 18:22:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.479 18:22:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.479 18:22:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.479 18:22:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.479 18:22:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.479 18:22:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:05:54.479 18:22:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:05:54.479 18:22:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.479 18:22:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.479 18:22:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.737 18:22:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:54.737 18:22:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.737 18:22:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.737 18:22:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.737 18:22:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.737 18:22:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.737 18:22:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:54.737 18:22:01 -- paths/export.sh@5 -- # export PATH 00:05:54.737 18:22:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.737 18:22:01 -- nvmf/common.sh@46 -- # : 0 00:05:54.737 18:22:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:54.737 18:22:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:54.737 18:22:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:54.737 18:22:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.737 18:22:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.737 18:22:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:54.737 18:22:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:54.737 18:22:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.737 INFO: launching applications... 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:54.737 18:22:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:54.738 18:22:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68395 00:05:54.738 Waiting for target to run... 00:05:54.738 18:22:01 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:54.738 18:22:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
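Unlike the preceding json_config run, which built its configuration over RPC, json_config_extra_key boots the target straight from a canned JSON file. The launch traced above reduces to a single command (core mask, memory size, socket and config path exactly as printed by the harness):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  # the harness records $! as app_pid and polls the socket (waitforlisten) before issuing RPCs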
00:05:54.738 18:22:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68395 /var/tmp/spdk_tgt.sock 00:05:54.738 18:22:01 -- common/autotest_common.sh@819 -- # '[' -z 68395 ']' 00:05:54.738 18:22:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.738 18:22:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.738 18:22:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.738 18:22:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.738 18:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.738 [2024-07-14 18:22:01.971837] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:54.738 [2024-07-14 18:22:01.971943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68395 ] 00:05:54.995 [2024-07-14 18:22:02.413440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.254 [2024-07-14 18:22:02.483677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.254 [2024-07-14 18:22:02.483882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.820 00:05:55.820 INFO: shutting down applications... 00:05:55.820 18:22:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.820 18:22:02 -- common/autotest_common.sh@852 -- # return 0 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
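The shutdown that follows uses the same pattern as the earlier json_config target: send SIGINT, then poll with kill -0 until the process is gone, giving up after 30 half-second waits. A compact sketch of that loop (PID hard-coded to the value reported above):

  pid=68395
  kill -SIGINT "$pid"
  for i in $(seq 1 30); do
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done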
00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68395 ]] 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68395 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68395 00:05:55.820 18:22:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68395 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:56.078 SPDK target shutdown done 00:05:56.078 Success 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:56.078 18:22:03 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:56.078 ************************************ 00:05:56.078 END TEST json_config_extra_key 00:05:56.078 ************************************ 00:05:56.078 00:05:56.078 real 0m1.659s 00:05:56.078 user 0m1.596s 00:05:56.078 sys 0m0.457s 00:05:56.078 18:22:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.078 18:22:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.336 18:22:03 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.336 18:22:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.336 18:22:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.336 18:22:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.336 ************************************ 00:05:56.336 START TEST alias_rpc 00:05:56.336 ************************************ 00:05:56.336 18:22:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.336 * Looking for test storage... 00:05:56.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:56.336 18:22:03 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.336 18:22:03 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68476 00:05:56.336 18:22:03 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.336 18:22:03 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68476 00:05:56.336 18:22:03 -- common/autotest_common.sh@819 -- # '[' -z 68476 ']' 00:05:56.336 18:22:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.336 18:22:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.336 18:22:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:56.336 18:22:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.336 18:22:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.336 [2024-07-14 18:22:03.684298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.336 [2024-07-14 18:22:03.684704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68476 ] 00:05:56.594 [2024-07-14 18:22:03.825894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.594 [2024-07-14 18:22:03.908582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.594 [2024-07-14 18:22:03.909008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.530 18:22:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.530 18:22:04 -- common/autotest_common.sh@852 -- # return 0 00:05:57.530 18:22:04 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:57.789 18:22:04 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68476 00:05:57.789 18:22:04 -- common/autotest_common.sh@926 -- # '[' -z 68476 ']' 00:05:57.789 18:22:04 -- common/autotest_common.sh@930 -- # kill -0 68476 00:05:57.789 18:22:04 -- common/autotest_common.sh@931 -- # uname 00:05:57.789 18:22:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.789 18:22:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68476 00:05:57.789 killing process with pid 68476 00:05:57.789 18:22:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.789 18:22:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.789 18:22:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68476' 00:05:57.789 18:22:04 -- common/autotest_common.sh@945 -- # kill 68476 00:05:57.789 18:22:04 -- common/autotest_common.sh@950 -- # wait 68476 00:05:58.047 ************************************ 00:05:58.047 END TEST alias_rpc 00:05:58.047 ************************************ 00:05:58.047 00:05:58.047 real 0m1.828s 00:05:58.047 user 0m2.074s 00:05:58.047 sys 0m0.467s 00:05:58.047 18:22:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.047 18:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 18:22:05 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:05:58.047 18:22:05 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.047 18:22:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.047 18:22:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.047 18:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 ************************************ 00:05:58.047 START TEST dpdk_mem_utility 00:05:58.047 ************************************ 00:05:58.047 18:22:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.306 * Looking for test storage... 
00:05:58.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:58.306 18:22:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.306 18:22:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68567 00:05:58.306 18:22:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.306 18:22:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68567 00:05:58.306 18:22:05 -- common/autotest_common.sh@819 -- # '[' -z 68567 ']' 00:05:58.306 18:22:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.306 18:22:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.306 18:22:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.306 18:22:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.306 18:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.306 [2024-07-14 18:22:05.557710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:58.306 [2024-07-14 18:22:05.557823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68567 ] 00:05:58.306 [2024-07-14 18:22:05.698076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.564 [2024-07-14 18:22:05.778824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.564 [2024-07-14 18:22:05.779054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.131 18:22:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.131 18:22:06 -- common/autotest_common.sh@852 -- # return 0 00:05:59.132 18:22:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.132 18:22:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.132 18:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.132 18:22:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.132 { 00:05:59.132 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.132 } 00:05:59.132 18:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.132 18:22:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:59.392 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:59.392 1 heaps totaling size 814.000000 MiB 00:05:59.392 size: 814.000000 MiB heap id: 0 00:05:59.392 end heaps---------- 00:05:59.392 8 mempools totaling size 598.116089 MiB 00:05:59.392 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.392 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.392 size: 84.521057 MiB name: bdev_io_68567 00:05:59.392 size: 51.011292 MiB name: evtpool_68567 00:05:59.392 size: 50.003479 MiB name: msgpool_68567 00:05:59.393 size: 21.763794 MiB name: PDU_Pool 00:05:59.393 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.393 size: 0.026123 MiB name: Session_Pool 00:05:59.393 end mempools------- 00:05:59.393 6 memzones totaling size 4.142822 MiB 00:05:59.393 size: 1.000366 MiB name: RG_ring_0_68567 
00:05:59.393 size: 1.000366 MiB name: RG_ring_1_68567 00:05:59.393 size: 1.000366 MiB name: RG_ring_4_68567 00:05:59.393 size: 1.000366 MiB name: RG_ring_5_68567 00:05:59.393 size: 0.125366 MiB name: RG_ring_2_68567 00:05:59.393 size: 0.015991 MiB name: RG_ring_3_68567 00:05:59.393 end memzones------- 00:05:59.393 18:22:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.393 heap id: 0 total size: 814.000000 MiB number of busy elements: 210 number of free elements: 15 00:05:59.393 list of free elements. size: 12.488403 MiB 00:05:59.393 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:59.393 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:59.393 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:59.393 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:59.393 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:59.393 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:59.393 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:59.393 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:59.393 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:59.393 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:59.393 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:59.393 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:59.393 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:59.393 element at address: 0x200027e00000 with size: 0.399048 MiB 00:05:59.393 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:59.393 list of standard malloc elements. size: 199.249023 MiB 00:05:59.393 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:59.393 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:59.393 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:59.393 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:59.393 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:59.393 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:59.393 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:59.393 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:59.393 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:59.393 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:59.393 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:59.393 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94480 
with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:59.393 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:59.394 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e66280 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e66340 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6cf40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e340 with size: 0.000183 MiB 
00:05:59.394 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:59.394 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:59.394 list of memzone associated elements. 
size: 602.262573 MiB 00:05:59.394 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:59.394 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.394 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:59.394 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.394 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:59.394 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68567_0 00:05:59.394 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:59.394 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68567_0 00:05:59.394 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:59.394 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68567_0 00:05:59.394 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:59.394 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.394 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:59.394 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.394 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:59.394 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68567 00:05:59.394 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:59.394 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68567 00:05:59.394 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:59.394 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68567 00:05:59.394 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:59.394 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.394 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:59.394 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.394 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:59.394 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.394 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:59.394 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.394 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:59.394 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68567 00:05:59.394 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:59.394 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68567 00:05:59.394 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:59.394 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68567 00:05:59.394 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:59.394 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68567 00:05:59.394 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:59.394 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68567 00:05:59.394 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:59.394 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.394 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:59.394 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.394 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:59.394 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.394 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:59.394 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68567 00:05:59.394 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:59.394 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.394 element at address: 0x200027e66400 with size: 0.023743 MiB 00:05:59.394 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.394 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:59.394 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68567 00:05:59.394 element at address: 0x200027e6c540 with size: 0.002441 MiB 00:05:59.394 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.394 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:59.394 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68567 00:05:59.394 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:59.394 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68567 00:05:59.394 element at address: 0x200027e6d000 with size: 0.000305 MiB 00:05:59.394 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.394 18:22:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.394 18:22:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68567 00:05:59.394 18:22:06 -- common/autotest_common.sh@926 -- # '[' -z 68567 ']' 00:05:59.394 18:22:06 -- common/autotest_common.sh@930 -- # kill -0 68567 00:05:59.394 18:22:06 -- common/autotest_common.sh@931 -- # uname 00:05:59.394 18:22:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.394 18:22:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68567 00:05:59.394 18:22:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.394 18:22:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.394 18:22:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68567' 00:05:59.394 killing process with pid 68567 00:05:59.394 18:22:06 -- common/autotest_common.sh@945 -- # kill 68567 00:05:59.394 18:22:06 -- common/autotest_common.sh@950 -- # wait 68567 00:05:59.962 00:05:59.962 real 0m1.658s 00:05:59.962 user 0m1.796s 00:05:59.962 sys 0m0.422s 00:05:59.962 18:22:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.962 18:22:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.962 ************************************ 00:05:59.962 END TEST dpdk_mem_utility 00:05:59.962 ************************************ 00:05:59.962 18:22:07 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.962 18:22:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.962 18:22:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.962 18:22:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.962 ************************************ 00:05:59.962 START TEST event 00:05:59.962 ************************************ 00:05:59.962 18:22:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.962 * Looking for test storage... 
00:05:59.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:59.962 18:22:07 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:59.962 18:22:07 -- bdev/nbd_common.sh@6 -- # set -e 00:05:59.962 18:22:07 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.962 18:22:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:59.962 18:22:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.962 18:22:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.962 ************************************ 00:05:59.962 START TEST event_perf 00:05:59.962 ************************************ 00:05:59.962 18:22:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.962 Running I/O for 1 seconds...[2024-07-14 18:22:07.237158] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:59.962 [2024-07-14 18:22:07.237248] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68650 ] 00:05:59.962 [2024-07-14 18:22:07.377702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.220 [2024-07-14 18:22:07.465425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.220 [2024-07-14 18:22:07.465576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.220 [2024-07-14 18:22:07.465686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.220 Running I/O for 1 seconds...[2024-07-14 18:22:07.465684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.154 00:06:01.154 lcore 0: 194351 00:06:01.154 lcore 1: 194354 00:06:01.154 lcore 2: 194355 00:06:01.154 lcore 3: 194357 00:06:01.154 done. 00:06:01.154 00:06:01.154 real 0m1.326s 00:06:01.154 user 0m4.129s 00:06:01.154 sys 0m0.072s 00:06:01.154 18:22:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.154 18:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:01.154 ************************************ 00:06:01.154 END TEST event_perf 00:06:01.154 ************************************ 00:06:01.413 18:22:08 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:01.413 18:22:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:01.413 18:22:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.413 18:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:01.413 ************************************ 00:06:01.413 START TEST event_reactor 00:06:01.413 ************************************ 00:06:01.413 18:22:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:01.413 [2024-07-14 18:22:08.607360] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:01.413 [2024-07-14 18:22:08.607444] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68694 ] 00:06:01.413 [2024-07-14 18:22:08.742070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.671 [2024-07-14 18:22:08.840832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.606 test_start 00:06:02.606 oneshot 00:06:02.606 tick 100 00:06:02.606 tick 100 00:06:02.606 tick 250 00:06:02.606 tick 100 00:06:02.606 tick 100 00:06:02.606 tick 250 00:06:02.606 tick 500 00:06:02.606 tick 100 00:06:02.606 tick 100 00:06:02.606 tick 100 00:06:02.606 tick 250 00:06:02.606 tick 100 00:06:02.606 tick 100 00:06:02.606 test_end 00:06:02.606 00:06:02.606 real 0m1.323s 00:06:02.606 user 0m1.168s 00:06:02.606 sys 0m0.049s 00:06:02.606 18:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.606 ************************************ 00:06:02.606 END TEST event_reactor 00:06:02.606 ************************************ 00:06:02.606 18:22:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.606 18:22:09 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:02.606 18:22:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:02.606 18:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.606 18:22:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.606 ************************************ 00:06:02.606 START TEST event_reactor_perf 00:06:02.606 ************************************ 00:06:02.606 18:22:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:02.606 [2024-07-14 18:22:09.985277] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:02.606 [2024-07-14 18:22:09.985398] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68724 ] 00:06:02.865 [2024-07-14 18:22:10.126206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.865 [2024-07-14 18:22:10.211302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.240 test_start 00:06:04.240 test_end 00:06:04.240 Performance: 409062 events per second 00:06:04.240 00:06:04.240 real 0m1.316s 00:06:04.240 user 0m1.154s 00:06:04.240 sys 0m0.055s 00:06:04.240 18:22:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.240 18:22:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.240 ************************************ 00:06:04.240 END TEST event_reactor_perf 00:06:04.240 ************************************ 00:06:04.240 18:22:11 -- event/event.sh@49 -- # uname -s 00:06:04.240 18:22:11 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.240 18:22:11 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.240 18:22:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.240 18:22:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.240 18:22:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.240 ************************************ 00:06:04.240 START TEST event_scheduler 00:06:04.240 ************************************ 00:06:04.240 18:22:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.240 * Looking for test storage... 00:06:04.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:04.240 18:22:11 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.240 18:22:11 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68785 00:06:04.240 18:22:11 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.240 18:22:11 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.240 18:22:11 -- scheduler/scheduler.sh@37 -- # waitforlisten 68785 00:06:04.240 18:22:11 -- common/autotest_common.sh@819 -- # '[' -z 68785 ']' 00:06:04.240 18:22:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.240 18:22:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.240 18:22:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.240 18:22:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.240 18:22:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.240 [2024-07-14 18:22:11.458745] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:04.240 [2024-07-14 18:22:11.458821] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68785 ] 00:06:04.240 [2024-07-14 18:22:11.594804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.539 [2024-07-14 18:22:11.687649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.539 [2024-07-14 18:22:11.687815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.539 [2024-07-14 18:22:11.687924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.539 [2024-07-14 18:22:11.687925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.105 18:22:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.105 18:22:12 -- common/autotest_common.sh@852 -- # return 0 00:06:05.105 18:22:12 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.105 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.105 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.105 POWER: Env isn't set yet! 00:06:05.105 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:05.105 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.105 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.105 POWER: Attempting to initialise PSTAT power management... 00:06:05.105 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.105 POWER: Cannot set governor of lcore 0 to performance 00:06:05.105 POWER: Attempting to initialise AMD PSTATE power management... 00:06:05.105 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.105 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.105 POWER: Attempting to initialise CPPC power management... 00:06:05.105 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.105 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.105 POWER: Attempting to initialise VM power management... 
00:06:05.105 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:05.105 POWER: Unable to set Power Management Environment for lcore 0 00:06:05.105 [2024-07-14 18:22:12.443362] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:05.105 [2024-07-14 18:22:12.443376] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:05.105 [2024-07-14 18:22:12.443384] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.105 [2024-07-14 18:22:12.443397] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.105 [2024-07-14 18:22:12.443404] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.105 [2024-07-14 18:22:12.443410] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.105 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.105 18:22:12 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.105 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.105 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 [2024-07-14 18:22:12.532086] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.365 18:22:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.365 18:22:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 ************************************ 00:06:05.365 START TEST scheduler_create_thread 00:06:05.365 ************************************ 00:06:05.365 18:22:12 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 2 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 3 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 4 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 5 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 6 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 7 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 8 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 9 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 10 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 18:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.365 18:22:12 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.365 18:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.365 18:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:06.744 18:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.744 18:22:14 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.744 18:22:14 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.744 18:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:06.744 18:22:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.119 18:22:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.119 00:06:08.119 real 0m2.615s 00:06:08.119 user 0m0.021s 00:06:08.119 sys 0m0.004s 00:06:08.119 18:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.119 ************************************ 00:06:08.119 END TEST scheduler_create_thread 
00:06:08.119 ************************************ 00:06:08.119 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.119 18:22:15 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.119 18:22:15 -- scheduler/scheduler.sh@46 -- # killprocess 68785 00:06:08.119 18:22:15 -- common/autotest_common.sh@926 -- # '[' -z 68785 ']' 00:06:08.119 18:22:15 -- common/autotest_common.sh@930 -- # kill -0 68785 00:06:08.119 18:22:15 -- common/autotest_common.sh@931 -- # uname 00:06:08.119 18:22:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.119 18:22:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68785 00:06:08.119 killing process with pid 68785 00:06:08.119 18:22:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:08.119 18:22:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:08.119 18:22:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68785' 00:06:08.119 18:22:15 -- common/autotest_common.sh@945 -- # kill 68785 00:06:08.119 18:22:15 -- common/autotest_common.sh@950 -- # wait 68785 00:06:08.378 [2024-07-14 18:22:15.640065] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:08.637 ************************************ 00:06:08.637 END TEST event_scheduler 00:06:08.637 ************************************ 00:06:08.637 00:06:08.637 real 0m4.514s 00:06:08.637 user 0m8.617s 00:06:08.637 sys 0m0.369s 00:06:08.637 18:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.637 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.637 18:22:15 -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.637 18:22:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.637 18:22:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.637 18:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.637 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.637 ************************************ 00:06:08.637 START TEST app_repeat 00:06:08.637 ************************************ 00:06:08.637 18:22:15 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:08.637 18:22:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.637 18:22:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.637 18:22:15 -- event/event.sh@13 -- # local nbd_list 00:06:08.637 18:22:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.637 18:22:15 -- event/event.sh@14 -- # local bdev_list 00:06:08.637 18:22:15 -- event/event.sh@15 -- # local repeat_times=4 00:06:08.637 18:22:15 -- event/event.sh@17 -- # modprobe nbd 00:06:08.637 18:22:15 -- event/event.sh@19 -- # repeat_pid=68902 00:06:08.637 Process app_repeat pid: 68902 00:06:08.637 18:22:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.637 18:22:15 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.637 18:22:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68902' 00:06:08.637 18:22:15 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.637 spdk_app_start Round 0 00:06:08.637 18:22:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.637 18:22:15 -- event/event.sh@25 -- # waitforlisten 68902 /var/tmp/spdk-nbd.sock 00:06:08.637 18:22:15 -- common/autotest_common.sh@819 -- # '[' -z 68902 ']' 00:06:08.637 18:22:15 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.637 18:22:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.637 18:22:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.637 18:22:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.637 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.637 [2024-07-14 18:22:15.930038] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:08.637 [2024-07-14 18:22:15.930122] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68902 ] 00:06:08.896 [2024-07-14 18:22:16.063149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.896 [2024-07-14 18:22:16.139031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.896 [2024-07-14 18:22:16.139039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.832 18:22:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.832 18:22:16 -- common/autotest_common.sh@852 -- # return 0 00:06:09.832 18:22:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.832 Malloc0 00:06:09.832 18:22:17 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.091 Malloc1 00:06:10.091 18:22:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.091 18:22:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.349 /dev/nbd0 00:06:10.349 18:22:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.349 18:22:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.349 18:22:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:10.349 18:22:17 -- common/autotest_common.sh@857 -- # local i 00:06:10.349 18:22:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:10.349 18:22:17 -- common/autotest_common.sh@859 
-- # (( i <= 20 )) 00:06:10.349 18:22:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:10.349 18:22:17 -- common/autotest_common.sh@861 -- # break 00:06:10.349 18:22:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:10.349 18:22:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:10.349 18:22:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.349 1+0 records in 00:06:10.349 1+0 records out 00:06:10.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246641 s, 16.6 MB/s 00:06:10.349 18:22:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.349 18:22:17 -- common/autotest_common.sh@874 -- # size=4096 00:06:10.349 18:22:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.349 18:22:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:10.349 18:22:17 -- common/autotest_common.sh@877 -- # return 0 00:06:10.349 18:22:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.349 18:22:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.349 18:22:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.608 /dev/nbd1 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.608 18:22:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:10.608 18:22:17 -- common/autotest_common.sh@857 -- # local i 00:06:10.608 18:22:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:10.608 18:22:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:10.608 18:22:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:10.608 18:22:17 -- common/autotest_common.sh@861 -- # break 00:06:10.608 18:22:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:10.608 18:22:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:10.608 18:22:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.608 1+0 records in 00:06:10.608 1+0 records out 00:06:10.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262555 s, 15.6 MB/s 00:06:10.608 18:22:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.608 18:22:17 -- common/autotest_common.sh@874 -- # size=4096 00:06:10.608 18:22:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.608 18:22:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:10.608 18:22:17 -- common/autotest_common.sh@877 -- # return 0 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.608 18:22:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.867 { 00:06:10.867 "bdev_name": "Malloc0", 00:06:10.867 "nbd_device": "/dev/nbd0" 00:06:10.867 }, 00:06:10.867 { 00:06:10.867 "bdev_name": "Malloc1", 00:06:10.867 "nbd_device": 
"/dev/nbd1" 00:06:10.867 } 00:06:10.867 ]' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.867 { 00:06:10.867 "bdev_name": "Malloc0", 00:06:10.867 "nbd_device": "/dev/nbd0" 00:06:10.867 }, 00:06:10.867 { 00:06:10.867 "bdev_name": "Malloc1", 00:06:10.867 "nbd_device": "/dev/nbd1" 00:06:10.867 } 00:06:10.867 ]' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.867 /dev/nbd1' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.867 /dev/nbd1' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.867 256+0 records in 00:06:10.867 256+0 records out 00:06:10.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108276 s, 96.8 MB/s 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.867 18:22:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.126 256+0 records in 00:06:11.126 256+0 records out 00:06:11.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255978 s, 41.0 MB/s 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.126 256+0 records in 00:06:11.126 256+0 records out 00:06:11.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271455 s, 38.6 MB/s 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.126 18:22:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@41 -- # break 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.385 18:22:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@41 -- # break 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.644 18:22:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@65 -- # true 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.902 18:22:19 -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.902 18:22:19 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.163 18:22:19 -- event/event.sh@35 -- # sleep 3 00:06:12.421 [2024-07-14 18:22:19.677571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.421 [2024-07-14 18:22:19.741388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.421 [2024-07-14 
18:22:19.741410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.421 [2024-07-14 18:22:19.798162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.421 [2024-07-14 18:22:19.798236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.714 18:22:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.714 spdk_app_start Round 1 00:06:15.714 18:22:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.714 18:22:22 -- event/event.sh@25 -- # waitforlisten 68902 /var/tmp/spdk-nbd.sock 00:06:15.714 18:22:22 -- common/autotest_common.sh@819 -- # '[' -z 68902 ']' 00:06:15.714 18:22:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.714 18:22:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.714 18:22:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.714 18:22:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.714 18:22:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.714 18:22:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.714 18:22:22 -- common/autotest_common.sh@852 -- # return 0 00:06:15.714 18:22:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.714 Malloc0 00:06:15.714 18:22:23 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.972 Malloc1 00:06:15.972 18:22:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@12 -- # local i 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.972 18:22:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.229 /dev/nbd0 00:06:16.229 18:22:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.229 18:22:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.229 18:22:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:16.229 18:22:23 -- common/autotest_common.sh@857 -- # local i 00:06:16.229 18:22:23 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:06:16.229 18:22:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.229 18:22:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:16.229 18:22:23 -- common/autotest_common.sh@861 -- # break 00:06:16.229 18:22:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.230 18:22:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.230 18:22:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.230 1+0 records in 00:06:16.230 1+0 records out 00:06:16.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164757 s, 24.9 MB/s 00:06:16.230 18:22:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.230 18:22:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.230 18:22:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.230 18:22:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.230 18:22:23 -- common/autotest_common.sh@877 -- # return 0 00:06:16.230 18:22:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.230 18:22:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.230 18:22:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.488 /dev/nbd1 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.488 18:22:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:16.488 18:22:23 -- common/autotest_common.sh@857 -- # local i 00:06:16.488 18:22:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.488 18:22:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.488 18:22:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:16.488 18:22:23 -- common/autotest_common.sh@861 -- # break 00:06:16.488 18:22:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.488 18:22:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.488 18:22:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.488 1+0 records in 00:06:16.488 1+0 records out 00:06:16.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280234 s, 14.6 MB/s 00:06:16.488 18:22:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.488 18:22:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.488 18:22:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.488 18:22:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.488 18:22:23 -- common/autotest_common.sh@877 -- # return 0 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.488 18:22:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.746 { 00:06:16.746 "bdev_name": "Malloc0", 00:06:16.746 "nbd_device": "/dev/nbd0" 00:06:16.746 }, 00:06:16.746 { 00:06:16.746 
"bdev_name": "Malloc1", 00:06:16.746 "nbd_device": "/dev/nbd1" 00:06:16.746 } 00:06:16.746 ]' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.746 { 00:06:16.746 "bdev_name": "Malloc0", 00:06:16.746 "nbd_device": "/dev/nbd0" 00:06:16.746 }, 00:06:16.746 { 00:06:16.746 "bdev_name": "Malloc1", 00:06:16.746 "nbd_device": "/dev/nbd1" 00:06:16.746 } 00:06:16.746 ]' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.746 /dev/nbd1' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.746 /dev/nbd1' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.746 18:22:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.746 256+0 records in 00:06:16.746 256+0 records out 00:06:16.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574623 s, 182 MB/s 00:06:16.747 18:22:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.005 256+0 records in 00:06:17.005 256+0 records out 00:06:17.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246008 s, 42.6 MB/s 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.005 256+0 records in 00:06:17.005 256+0 records out 00:06:17.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263542 s, 39.8 MB/s 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.005 18:22:24 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.005 18:22:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@41 -- # break 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.263 18:22:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@41 -- # break 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.521 18:22:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@65 -- # true 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.779 18:22:24 -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.779 18:22:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.038 18:22:25 -- event/event.sh@35 -- # sleep 3 00:06:18.038 [2024-07-14 18:22:25.421018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.296 [2024-07-14 18:22:25.486952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
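The cycle that just completed here, and that every app_repeat round below repeats, is the NBD data-verification pattern: expose the two Malloc bdevs as NBD devices over the test's RPC socket, write 1 MiB of random data through each device with O_DIRECT, read it back with cmp, then detach the devices and ask the app to shut down for the next round. A simplified sketch of that sequence, using the socket, bdev names and paths from this run (the real nbd_common.sh helpers add /proc/partitions polling and disk-count checks):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0        # attach each bdev as an NBD block device
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random reference data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$dev bs=4096 count=256 oflag=direct   # write it through the device
    cmp -b -n 1M $tmp $dev                              # and verify it reads back unchanged
  done
  rm $tmp
  $rpc -s $sock nbd_stop_disk /dev/nbd0                 # detach; nbd_get_disks should then report an empty list
  $rpc -s $sock nbd_stop_disk /dev/nbd1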
00:06:18.296 [2024-07-14 18:22:25.486965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.296 [2024-07-14 18:22:25.544415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.297 [2024-07-14 18:22:25.544482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.828 18:22:28 -- event/event.sh@23 -- # for i in {0..2} 00:06:20.828 spdk_app_start Round 2 00:06:20.828 18:22:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.828 18:22:28 -- event/event.sh@25 -- # waitforlisten 68902 /var/tmp/spdk-nbd.sock 00:06:20.828 18:22:28 -- common/autotest_common.sh@819 -- # '[' -z 68902 ']' 00:06:20.828 18:22:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.828 18:22:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.828 18:22:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.828 18:22:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.828 18:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 18:22:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.087 18:22:28 -- common/autotest_common.sh@852 -- # return 0 00:06:21.087 18:22:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.346 Malloc0 00:06:21.346 18:22:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.604 Malloc1 00:06:21.604 18:22:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.604 18:22:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.863 /dev/nbd0 00:06:21.863 18:22:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.863 18:22:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.863 18:22:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:21.863 18:22:29 -- common/autotest_common.sh@857 -- # local i 00:06:21.863 18:22:29 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:21.864 18:22:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:21.864 18:22:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:21.864 18:22:29 -- common/autotest_common.sh@861 -- # break 00:06:21.864 18:22:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:21.864 18:22:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:21.864 18:22:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.864 1+0 records in 00:06:21.864 1+0 records out 00:06:21.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229085 s, 17.9 MB/s 00:06:21.864 18:22:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.864 18:22:29 -- common/autotest_common.sh@874 -- # size=4096 00:06:21.864 18:22:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.864 18:22:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:21.864 18:22:29 -- common/autotest_common.sh@877 -- # return 0 00:06:21.864 18:22:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.864 18:22:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.864 18:22:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.122 /dev/nbd1 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.122 18:22:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:22.122 18:22:29 -- common/autotest_common.sh@857 -- # local i 00:06:22.122 18:22:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:22.122 18:22:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:22.122 18:22:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:22.122 18:22:29 -- common/autotest_common.sh@861 -- # break 00:06:22.122 18:22:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:22.122 18:22:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:22.122 18:22:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.122 1+0 records in 00:06:22.122 1+0 records out 00:06:22.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032967 s, 12.4 MB/s 00:06:22.122 18:22:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.122 18:22:29 -- common/autotest_common.sh@874 -- # size=4096 00:06:22.122 18:22:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.122 18:22:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:22.122 18:22:29 -- common/autotest_common.sh@877 -- # return 0 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.122 18:22:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.380 18:22:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.380 { 00:06:22.380 "bdev_name": "Malloc0", 00:06:22.380 "nbd_device": "/dev/nbd0" 
00:06:22.380 }, 00:06:22.380 { 00:06:22.380 "bdev_name": "Malloc1", 00:06:22.380 "nbd_device": "/dev/nbd1" 00:06:22.380 } 00:06:22.380 ]' 00:06:22.380 18:22:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.380 { 00:06:22.380 "bdev_name": "Malloc0", 00:06:22.380 "nbd_device": "/dev/nbd0" 00:06:22.380 }, 00:06:22.380 { 00:06:22.380 "bdev_name": "Malloc1", 00:06:22.380 "nbd_device": "/dev/nbd1" 00:06:22.380 } 00:06:22.380 ]' 00:06:22.380 18:22:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.638 /dev/nbd1' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.638 /dev/nbd1' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.638 256+0 records in 00:06:22.638 256+0 records out 00:06:22.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00933547 s, 112 MB/s 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.638 256+0 records in 00:06:22.638 256+0 records out 00:06:22.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249701 s, 42.0 MB/s 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.638 256+0 records in 00:06:22.638 256+0 records out 00:06:22.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279607 s, 37.5 MB/s 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@51 -- # local i 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.638 18:22:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@41 -- # break 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.897 18:22:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@41 -- # break 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.155 18:22:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.414 18:22:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.414 18:22:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.414 18:22:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.414 18:22:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@65 -- # true 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.672 18:22:30 -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.672 18:22:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.930 18:22:31 -- event/event.sh@35 -- # sleep 3 00:06:23.930 [2024-07-14 18:22:31.338028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.189 [2024-07-14 18:22:31.411720] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:24.189 [2024-07-14 18:22:31.411730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.189 [2024-07-14 18:22:31.469379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.189 [2024-07-14 18:22:31.469485] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.475 18:22:34 -- event/event.sh@38 -- # waitforlisten 68902 /var/tmp/spdk-nbd.sock 00:06:27.475 18:22:34 -- common/autotest_common.sh@819 -- # '[' -z 68902 ']' 00:06:27.475 18:22:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.475 18:22:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.476 18:22:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.476 18:22:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.476 18:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 18:22:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.476 18:22:34 -- common/autotest_common.sh@852 -- # return 0 00:06:27.476 18:22:34 -- event/event.sh@39 -- # killprocess 68902 00:06:27.476 18:22:34 -- common/autotest_common.sh@926 -- # '[' -z 68902 ']' 00:06:27.476 18:22:34 -- common/autotest_common.sh@930 -- # kill -0 68902 00:06:27.476 18:22:34 -- common/autotest_common.sh@931 -- # uname 00:06:27.476 18:22:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.476 18:22:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68902 00:06:27.476 18:22:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.476 killing process with pid 68902 00:06:27.476 18:22:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.476 18:22:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68902' 00:06:27.476 18:22:34 -- common/autotest_common.sh@945 -- # kill 68902 00:06:27.476 18:22:34 -- common/autotest_common.sh@950 -- # wait 68902 00:06:27.476 spdk_app_start is called in Round 0. 00:06:27.476 Shutdown signal received, stop current app iteration 00:06:27.476 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:27.476 spdk_app_start is called in Round 1. 00:06:27.476 Shutdown signal received, stop current app iteration 00:06:27.476 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:27.476 spdk_app_start is called in Round 2. 00:06:27.476 Shutdown signal received, stop current app iteration 00:06:27.476 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:27.476 spdk_app_start is called in Round 3. 
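Between rounds the test does not kill the process directly: it sends spdk_kill_instance SIGTERM over the RPC socket so the app can log the shutdown and re-enter spdk_app_start for the next round, and only after Round 3 is the pid terminated and reaped. In outline, with the pid and socket from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # ends the current round
  sleep 3                                                     # let the app come back up before the next round
  # after the final round, killprocess takes over:
  kill -0 68902 && kill 68902 && wait 68902                   # confirm it is alive, SIGTERM it, reap it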
00:06:27.476 Shutdown signal received, stop current app iteration 00:06:27.476 18:22:34 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.476 18:22:34 -- event/event.sh@42 -- # return 0 00:06:27.476 00:06:27.476 real 0m18.758s 00:06:27.476 user 0m41.920s 00:06:27.476 sys 0m3.045s 00:06:27.476 18:22:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.476 18:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 ************************************ 00:06:27.476 END TEST app_repeat 00:06:27.476 ************************************ 00:06:27.476 18:22:34 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.476 18:22:34 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.476 18:22:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.476 18:22:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.476 18:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 ************************************ 00:06:27.476 START TEST cpu_locks 00:06:27.476 ************************************ 00:06:27.476 18:22:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.476 * Looking for test storage... 00:06:27.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.476 18:22:34 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.476 18:22:34 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.476 18:22:34 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.476 18:22:34 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.476 18:22:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.476 18:22:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.476 18:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 ************************************ 00:06:27.476 START TEST default_locks 00:06:27.476 ************************************ 00:06:27.476 18:22:34 -- common/autotest_common.sh@1104 -- # default_locks 00:06:27.476 18:22:34 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69525 00:06:27.476 18:22:34 -- event/cpu_locks.sh@47 -- # waitforlisten 69525 00:06:27.476 18:22:34 -- common/autotest_common.sh@819 -- # '[' -z 69525 ']' 00:06:27.476 18:22:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.476 18:22:34 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.476 18:22:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.476 18:22:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.476 18:22:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.476 18:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 [2024-07-14 18:22:34.878721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
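default_locks, which starts here, checks the CPU core lock files directly: spdk_tgt is pinned to core 0 with -m 0x1, the test waits for its RPC socket, asserts with lslocks that the process holds an spdk_cpu_lock, kills it, and finally confirms that waiting on the dead pid fails. Roughly, with the paths and pid from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  pid=$!                                        # 69525 in this run
  # waitforlisten: poll until /var/tmp/spdk.sock answers RPCs, then
  lslocks -p $pid | grep -q spdk_cpu_lock       # the core-0 lock must be held
  kill $pid && wait $pid                        # killprocess
  # a second waitforlisten on the same pid is now expected to fail (return 1)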
00:06:27.476 [2024-07-14 18:22:34.878858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69525 ] 00:06:27.736 [2024-07-14 18:22:35.020462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.736 [2024-07-14 18:22:35.119298] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.736 [2024-07-14 18:22:35.119504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.673 18:22:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.673 18:22:35 -- common/autotest_common.sh@852 -- # return 0 00:06:28.673 18:22:35 -- event/cpu_locks.sh@49 -- # locks_exist 69525 00:06:28.673 18:22:35 -- event/cpu_locks.sh@22 -- # lslocks -p 69525 00:06:28.673 18:22:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.934 18:22:36 -- event/cpu_locks.sh@50 -- # killprocess 69525 00:06:28.934 18:22:36 -- common/autotest_common.sh@926 -- # '[' -z 69525 ']' 00:06:28.934 18:22:36 -- common/autotest_common.sh@930 -- # kill -0 69525 00:06:28.934 18:22:36 -- common/autotest_common.sh@931 -- # uname 00:06:28.934 18:22:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.934 18:22:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69525 00:06:28.934 18:22:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.934 18:22:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.934 18:22:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69525' 00:06:28.934 killing process with pid 69525 00:06:28.934 18:22:36 -- common/autotest_common.sh@945 -- # kill 69525 00:06:28.934 18:22:36 -- common/autotest_common.sh@950 -- # wait 69525 00:06:29.193 18:22:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69525 00:06:29.193 18:22:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.193 18:22:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69525 00:06:29.193 18:22:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:29.193 18:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.193 18:22:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:29.193 18:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.193 18:22:36 -- common/autotest_common.sh@643 -- # waitforlisten 69525 00:06:29.193 18:22:36 -- common/autotest_common.sh@819 -- # '[' -z 69525 ']' 00:06:29.193 18:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.193 18:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.193 18:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.193 18:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.193 18:22:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.193 ERROR: process (pid: 69525) is no longer running 00:06:29.193 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69525) - No such process 00:06:29.193 18:22:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.193 18:22:36 -- common/autotest_common.sh@852 -- # return 1 00:06:29.193 18:22:36 -- common/autotest_common.sh@643 -- # es=1 00:06:29.193 18:22:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.193 18:22:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.193 18:22:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.193 18:22:36 -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.193 18:22:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.193 18:22:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.193 18:22:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.193 00:06:29.193 real 0m1.779s 00:06:29.193 user 0m1.844s 00:06:29.193 sys 0m0.584s 00:06:29.193 18:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.193 18:22:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.193 ************************************ 00:06:29.193 END TEST default_locks 00:06:29.193 ************************************ 00:06:29.452 18:22:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.452 18:22:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.452 18:22:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.452 18:22:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.452 ************************************ 00:06:29.452 START TEST default_locks_via_rpc 00:06:29.452 ************************************ 00:06:29.452 18:22:36 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:29.452 18:22:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69584 00:06:29.452 18:22:36 -- event/cpu_locks.sh@63 -- # waitforlisten 69584 00:06:29.452 18:22:36 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.452 18:22:36 -- common/autotest_common.sh@819 -- # '[' -z 69584 ']' 00:06:29.452 18:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.452 18:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.452 18:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.452 18:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.452 18:22:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.452 [2024-07-14 18:22:36.689900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
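default_locks_via_rpc, starting here, exercises the same lock through the framework RPCs instead: with the target running, the locks are dropped with framework_disable_cpumask_locks (no lock should remain held), re-taken with framework_enable_cpumask_locks, and then verified with lslocks before the process is killed. Sketched with the pid from this run and rpc.py's default /var/tmp/spdk.sock socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks            # release the per-core lock files
  ! lslocks -p 69584 | grep -q spdk_cpu_lock      # no spdk_cpu_lock may be held now
  $rpc framework_enable_cpumask_locks             # take them again
  lslocks -p 69584 | grep -q spdk_cpu_lock        # lock must be back before killprocess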
00:06:29.452 [2024-07-14 18:22:36.689996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:06:29.452 [2024-07-14 18:22:36.822915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.711 [2024-07-14 18:22:36.916094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.711 [2024-07-14 18:22:36.916319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.278 18:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.278 18:22:37 -- common/autotest_common.sh@852 -- # return 0 00:06:30.278 18:22:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:30.278 18:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:30.278 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.537 18:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:30.537 18:22:37 -- event/cpu_locks.sh@67 -- # no_locks 00:06:30.537 18:22:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.537 18:22:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.537 18:22:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.537 18:22:37 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.537 18:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:30.537 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.537 18:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:30.537 18:22:37 -- event/cpu_locks.sh@71 -- # locks_exist 69584 00:06:30.537 18:22:37 -- event/cpu_locks.sh@22 -- # lslocks -p 69584 00:06:30.537 18:22:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.796 18:22:38 -- event/cpu_locks.sh@73 -- # killprocess 69584 00:06:30.796 18:22:38 -- common/autotest_common.sh@926 -- # '[' -z 69584 ']' 00:06:30.796 18:22:38 -- common/autotest_common.sh@930 -- # kill -0 69584 00:06:30.796 18:22:38 -- common/autotest_common.sh@931 -- # uname 00:06:30.796 18:22:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.796 18:22:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69584 00:06:30.796 killing process with pid 69584 00:06:30.796 18:22:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.796 18:22:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.796 18:22:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69584' 00:06:30.796 18:22:38 -- common/autotest_common.sh@945 -- # kill 69584 00:06:30.796 18:22:38 -- common/autotest_common.sh@950 -- # wait 69584 00:06:31.054 00:06:31.054 real 0m1.826s 00:06:31.054 user 0m1.992s 00:06:31.054 sys 0m0.516s 00:06:31.054 18:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.054 18:22:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.054 ************************************ 00:06:31.054 END TEST default_locks_via_rpc 00:06:31.054 ************************************ 00:06:31.313 18:22:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.313 18:22:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.313 18:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.313 18:22:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.313 
************************************ 00:06:31.313 START TEST non_locking_app_on_locked_coremask 00:06:31.313 ************************************ 00:06:31.313 18:22:38 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:31.313 18:22:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69648 00:06:31.313 18:22:38 -- event/cpu_locks.sh@81 -- # waitforlisten 69648 /var/tmp/spdk.sock 00:06:31.313 18:22:38 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.313 18:22:38 -- common/autotest_common.sh@819 -- # '[' -z 69648 ']' 00:06:31.313 18:22:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.313 18:22:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.313 18:22:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.313 18:22:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.313 18:22:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.313 [2024-07-14 18:22:38.589613] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:31.313 [2024-07-14 18:22:38.589767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69648 ] 00:06:31.313 [2024-07-14 18:22:38.731761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.572 [2024-07-14 18:22:38.814424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.572 [2024-07-14 18:22:38.814619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.508 18:22:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.508 18:22:39 -- common/autotest_common.sh@852 -- # return 0 00:06:32.508 18:22:39 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69676 00:06:32.508 18:22:39 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.508 18:22:39 -- event/cpu_locks.sh@85 -- # waitforlisten 69676 /var/tmp/spdk2.sock 00:06:32.508 18:22:39 -- common/autotest_common.sh@819 -- # '[' -z 69676 ']' 00:06:32.508 18:22:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.508 18:22:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.508 18:22:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.508 18:22:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.508 18:22:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.508 [2024-07-14 18:22:39.639090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:32.508 [2024-07-14 18:22:39.639197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69676 ] 00:06:32.508 [2024-07-14 18:22:39.780416] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.508 [2024-07-14 18:22:39.780470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.767 [2024-07-14 18:22:39.939451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.767 [2024-07-14 18:22:39.939653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.334 18:22:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.334 18:22:40 -- common/autotest_common.sh@852 -- # return 0 00:06:33.334 18:22:40 -- event/cpu_locks.sh@87 -- # locks_exist 69648 00:06:33.334 18:22:40 -- event/cpu_locks.sh@22 -- # lslocks -p 69648 00:06:33.334 18:22:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.271 18:22:41 -- event/cpu_locks.sh@89 -- # killprocess 69648 00:06:34.271 18:22:41 -- common/autotest_common.sh@926 -- # '[' -z 69648 ']' 00:06:34.271 18:22:41 -- common/autotest_common.sh@930 -- # kill -0 69648 00:06:34.271 18:22:41 -- common/autotest_common.sh@931 -- # uname 00:06:34.271 18:22:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.271 18:22:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69648 00:06:34.271 18:22:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.271 killing process with pid 69648 00:06:34.271 18:22:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.271 18:22:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69648' 00:06:34.271 18:22:41 -- common/autotest_common.sh@945 -- # kill 69648 00:06:34.271 18:22:41 -- common/autotest_common.sh@950 -- # wait 69648 00:06:34.838 18:22:42 -- event/cpu_locks.sh@90 -- # killprocess 69676 00:06:34.838 18:22:42 -- common/autotest_common.sh@926 -- # '[' -z 69676 ']' 00:06:34.838 18:22:42 -- common/autotest_common.sh@930 -- # kill -0 69676 00:06:34.838 18:22:42 -- common/autotest_common.sh@931 -- # uname 00:06:34.838 18:22:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.097 18:22:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69676 00:06:35.097 18:22:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.097 18:22:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.097 killing process with pid 69676 00:06:35.097 18:22:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69676' 00:06:35.097 18:22:42 -- common/autotest_common.sh@945 -- # kill 69676 00:06:35.097 18:22:42 -- common/autotest_common.sh@950 -- # wait 69676 00:06:35.356 00:06:35.356 real 0m4.151s 00:06:35.356 user 0m4.571s 00:06:35.356 sys 0m1.204s 00:06:35.356 18:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.356 18:22:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.356 ************************************ 00:06:35.356 END TEST non_locking_app_on_locked_coremask 00:06:35.356 ************************************ 00:06:35.356 18:22:42 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.356 18:22:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.356 18:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.356 18:22:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.356 ************************************ 00:06:35.356 START TEST locking_app_on_unlocked_coremask 00:06:35.356 ************************************ 00:06:35.356 18:22:42 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:35.356 18:22:42 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69755 00:06:35.356 18:22:42 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.356 18:22:42 -- event/cpu_locks.sh@99 -- # waitforlisten 69755 /var/tmp/spdk.sock 00:06:35.356 18:22:42 -- common/autotest_common.sh@819 -- # '[' -z 69755 ']' 00:06:35.356 18:22:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.356 18:22:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.357 18:22:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.357 18:22:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.357 18:22:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.616 [2024-07-14 18:22:42.789480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:35.616 [2024-07-14 18:22:42.790412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69755 ] 00:06:35.616 [2024-07-14 18:22:42.931037] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.616 [2024-07-14 18:22:42.931090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.616 [2024-07-14 18:22:43.023763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.616 [2024-07-14 18:22:43.023963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.576 18:22:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.576 18:22:43 -- common/autotest_common.sh@852 -- # return 0 00:06:36.576 18:22:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69783 00:06:36.576 18:22:43 -- event/cpu_locks.sh@103 -- # waitforlisten 69783 /var/tmp/spdk2.sock 00:06:36.576 18:22:43 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.576 18:22:43 -- common/autotest_common.sh@819 -- # '[' -z 69783 ']' 00:06:36.576 18:22:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.576 18:22:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.576 18:22:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.576 18:22:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.576 18:22:43 -- common/autotest_common.sh@10 -- # set +x 00:06:36.576 [2024-07-14 18:22:43.819263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:36.576 [2024-07-14 18:22:43.819385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69783 ] 00:06:36.576 [2024-07-14 18:22:43.956529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.834 [2024-07-14 18:22:44.143681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.834 [2024-07-14 18:22:44.143839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.401 18:22:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.401 18:22:44 -- common/autotest_common.sh@852 -- # return 0 00:06:37.401 18:22:44 -- event/cpu_locks.sh@105 -- # locks_exist 69783 00:06:37.401 18:22:44 -- event/cpu_locks.sh@22 -- # lslocks -p 69783 00:06:37.401 18:22:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.335 18:22:45 -- event/cpu_locks.sh@107 -- # killprocess 69755 00:06:38.335 18:22:45 -- common/autotest_common.sh@926 -- # '[' -z 69755 ']' 00:06:38.335 18:22:45 -- common/autotest_common.sh@930 -- # kill -0 69755 00:06:38.335 18:22:45 -- common/autotest_common.sh@931 -- # uname 00:06:38.335 18:22:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.335 18:22:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69755 00:06:38.335 18:22:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.335 18:22:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.335 killing process with pid 69755 00:06:38.335 18:22:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69755' 00:06:38.335 18:22:45 -- common/autotest_common.sh@945 -- # kill 69755 00:06:38.335 18:22:45 -- common/autotest_common.sh@950 -- # wait 69755 00:06:39.270 18:22:46 -- event/cpu_locks.sh@108 -- # killprocess 69783 00:06:39.270 18:22:46 -- common/autotest_common.sh@926 -- # '[' -z 69783 ']' 00:06:39.270 18:22:46 -- common/autotest_common.sh@930 -- # kill -0 69783 00:06:39.270 18:22:46 -- common/autotest_common.sh@931 -- # uname 00:06:39.270 18:22:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.270 18:22:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69783 00:06:39.270 18:22:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.270 18:22:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.270 killing process with pid 69783 00:06:39.270 18:22:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69783' 00:06:39.270 18:22:46 -- common/autotest_common.sh@945 -- # kill 69783 00:06:39.270 18:22:46 -- common/autotest_common.sh@950 -- # wait 69783 00:06:39.609 00:06:39.609 real 0m4.057s 00:06:39.609 user 0m4.452s 00:06:39.609 sys 0m1.176s 00:06:39.609 18:22:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.609 ************************************ 00:06:39.609 END TEST locking_app_on_unlocked_coremask 00:06:39.609 ************************************ 00:06:39.609 18:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:39.609 18:22:46 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.609 18:22:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.609 18:22:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.609 18:22:46 -- common/autotest_common.sh@10 -- # set +x 
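The trace above checks for the per-core lock by piping lslocks output for the target pid through grep. A minimal standalone sketch of that check follows; the helper name and structure are reconstructed from the commands visible in the trace (event/cpu_locks.sh itself is not quoted here), so treat it as an approximation rather than the upstream implementation.

    # Sketch: succeed if <pid> holds an SPDK CPU-core lock file.
    # Based on the logged commands: lslocks -p <pid> | grep -q spdk_cpu_lock.
    locks_exist() {
        local pid=$1
        # spdk_tgt takes one lock file per claimed core under /var/tmp/spdk_cpu_lock_*;
        # lslocks lists the file locks held by that process.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Example against the second target started above (pid 69783 in this run):
    locks_exist 69783 && echo 'core lock held' || echo 'no core lock'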
00:06:39.609 ************************************ 00:06:39.609 START TEST locking_app_on_locked_coremask 00:06:39.609 ************************************ 00:06:39.609 18:22:46 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:39.609 18:22:46 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69864 00:06:39.609 18:22:46 -- event/cpu_locks.sh@116 -- # waitforlisten 69864 /var/tmp/spdk.sock 00:06:39.609 18:22:46 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.609 18:22:46 -- common/autotest_common.sh@819 -- # '[' -z 69864 ']' 00:06:39.609 18:22:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.609 18:22:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.609 18:22:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.609 18:22:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.609 18:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:39.609 [2024-07-14 18:22:46.886203] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:39.609 [2024-07-14 18:22:46.886356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69864 ] 00:06:39.609 [2024-07-14 18:22:47.020946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.872 [2024-07-14 18:22:47.111938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.872 [2024-07-14 18:22:47.112091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.806 18:22:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.806 18:22:47 -- common/autotest_common.sh@852 -- # return 0 00:06:40.806 18:22:47 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69892 00:06:40.806 18:22:47 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.806 18:22:47 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69892 /var/tmp/spdk2.sock 00:06:40.806 18:22:47 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.806 18:22:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69892 /var/tmp/spdk2.sock 00:06:40.806 18:22:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:40.806 18:22:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.806 18:22:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:40.806 18:22:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.806 18:22:47 -- common/autotest_common.sh@643 -- # waitforlisten 69892 /var/tmp/spdk2.sock 00:06:40.806 18:22:47 -- common/autotest_common.sh@819 -- # '[' -z 69892 ']' 00:06:40.806 18:22:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.806 18:22:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.806 18:22:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
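The NOT waitforlisten 69892 call being set up above is the negative half of this test: a second target was just launched on a core that pid 69864 already holds, so waiting for it to come up is expected to fail, and the es=1 handling of that failure appears a little further down. Below is a rough sketch of such a negative-test wrapper, modelled on the es bookkeeping visible in the trace rather than the exact autotest_common.sh implementation.

    # Sketch of a NOT-style wrapper: run a command and succeed only if it fails.
    # The real helper in autotest_common.sh also validates the argument type;
    # that part is omitted here.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: a non-zero status from the wrapped command is a pass
    }

    # Usage in the spirit of the trace (waitforlisten is the test suite's own helper):
    # NOT waitforlisten 69892 /var/tmp/spdk2.sock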
00:06:40.806 18:22:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.806 18:22:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.806 [2024-07-14 18:22:47.934770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:40.806 [2024-07-14 18:22:47.935376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69892 ] 00:06:40.806 [2024-07-14 18:22:48.081877] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69864 has claimed it. 00:06:40.806 [2024-07-14 18:22:48.081986] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.372 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69892) - No such process 00:06:41.372 ERROR: process (pid: 69892) is no longer running 00:06:41.372 18:22:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.372 18:22:48 -- common/autotest_common.sh@852 -- # return 1 00:06:41.372 18:22:48 -- common/autotest_common.sh@643 -- # es=1 00:06:41.372 18:22:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.372 18:22:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.372 18:22:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.372 18:22:48 -- event/cpu_locks.sh@122 -- # locks_exist 69864 00:06:41.372 18:22:48 -- event/cpu_locks.sh@22 -- # lslocks -p 69864 00:06:41.372 18:22:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.938 18:22:49 -- event/cpu_locks.sh@124 -- # killprocess 69864 00:06:41.938 18:22:49 -- common/autotest_common.sh@926 -- # '[' -z 69864 ']' 00:06:41.938 18:22:49 -- common/autotest_common.sh@930 -- # kill -0 69864 00:06:41.938 18:22:49 -- common/autotest_common.sh@931 -- # uname 00:06:41.938 18:22:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.938 18:22:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69864 00:06:41.938 18:22:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.938 killing process with pid 69864 00:06:41.938 18:22:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.938 18:22:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69864' 00:06:41.938 18:22:49 -- common/autotest_common.sh@945 -- # kill 69864 00:06:41.938 18:22:49 -- common/autotest_common.sh@950 -- # wait 69864 00:06:42.197 00:06:42.197 real 0m2.646s 00:06:42.197 user 0m3.043s 00:06:42.197 sys 0m0.659s 00:06:42.197 18:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.197 18:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 ************************************ 00:06:42.197 END TEST locking_app_on_locked_coremask 00:06:42.197 ************************************ 00:06:42.197 18:22:49 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.197 18:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.197 18:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.197 18:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 ************************************ 00:06:42.197 START TEST locking_overlapped_coremask 00:06:42.197 ************************************ 00:06:42.197 18:22:49 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:42.197 18:22:49 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69949 00:06:42.197 18:22:49 -- event/cpu_locks.sh@133 -- # waitforlisten 69949 /var/tmp/spdk.sock 00:06:42.197 18:22:49 -- common/autotest_common.sh@819 -- # '[' -z 69949 ']' 00:06:42.197 18:22:49 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.197 18:22:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.197 18:22:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.197 18:22:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.197 18:22:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.197 18:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 [2024-07-14 18:22:49.593775] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:42.197 [2024-07-14 18:22:49.593924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69949 ] 00:06:42.456 [2024-07-14 18:22:49.734665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.457 [2024-07-14 18:22:49.823733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.457 [2024-07-14 18:22:49.824270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.457 [2024-07-14 18:22:49.824380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.457 [2024-07-14 18:22:49.824386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.392 18:22:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.392 18:22:50 -- common/autotest_common.sh@852 -- # return 0 00:06:43.392 18:22:50 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69979 00:06:43.392 18:22:50 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:43.392 18:22:50 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69979 /var/tmp/spdk2.sock 00:06:43.392 18:22:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.392 18:22:50 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69979 /var/tmp/spdk2.sock 00:06:43.392 18:22:50 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:43.392 18:22:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.392 18:22:50 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:43.392 18:22:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.392 18:22:50 -- common/autotest_common.sh@643 -- # waitforlisten 69979 /var/tmp/spdk2.sock 00:06:43.392 18:22:50 -- common/autotest_common.sh@819 -- # '[' -z 69979 ']' 00:06:43.392 18:22:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.392 18:22:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.392 18:22:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:43.392 18:22:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.392 18:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:43.392 [2024-07-14 18:22:50.643605] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:43.392 [2024-07-14 18:22:50.643695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69979 ] 00:06:43.392 [2024-07-14 18:22:50.788565] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69949 has claimed it. 00:06:43.392 [2024-07-14 18:22:50.788635] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69979) - No such process 00:06:43.958 ERROR: process (pid: 69979) is no longer running 00:06:43.958 18:22:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.958 18:22:51 -- common/autotest_common.sh@852 -- # return 1 00:06:43.958 18:22:51 -- common/autotest_common.sh@643 -- # es=1 00:06:43.958 18:22:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.958 18:22:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.958 18:22:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.959 18:22:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.959 18:22:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.959 18:22:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.959 18:22:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.959 18:22:51 -- event/cpu_locks.sh@141 -- # killprocess 69949 00:06:43.959 18:22:51 -- common/autotest_common.sh@926 -- # '[' -z 69949 ']' 00:06:43.959 18:22:51 -- common/autotest_common.sh@930 -- # kill -0 69949 00:06:43.959 18:22:51 -- common/autotest_common.sh@931 -- # uname 00:06:43.959 18:22:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:43.959 18:22:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69949 00:06:44.216 18:22:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.216 18:22:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.216 18:22:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69949' 00:06:44.216 killing process with pid 69949 00:06:44.216 18:22:51 -- common/autotest_common.sh@945 -- # kill 69949 00:06:44.216 18:22:51 -- common/autotest_common.sh@950 -- # wait 69949 00:06:44.475 00:06:44.475 real 0m2.267s 00:06:44.475 user 0m6.312s 00:06:44.475 sys 0m0.477s 00:06:44.475 18:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.475 18:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.475 ************************************ 00:06:44.475 END TEST locking_overlapped_coremask 00:06:44.475 ************************************ 00:06:44.475 18:22:51 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.475 18:22:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.475 18:22:51 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.475 18:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.475 ************************************ 00:06:44.475 START TEST locking_overlapped_coremask_via_rpc 00:06:44.475 ************************************ 00:06:44.475 18:22:51 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:44.475 18:22:51 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70025 00:06:44.475 18:22:51 -- event/cpu_locks.sh@149 -- # waitforlisten 70025 /var/tmp/spdk.sock 00:06:44.475 18:22:51 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.475 18:22:51 -- common/autotest_common.sh@819 -- # '[' -z 70025 ']' 00:06:44.475 18:22:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.475 18:22:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.475 18:22:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.475 18:22:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.475 18:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.735 [2024-07-14 18:22:51.909908] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:44.735 [2024-07-14 18:22:51.910005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70025 ] 00:06:44.735 [2024-07-14 18:22:52.043063] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.735 [2024-07-14 18:22:52.043163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.735 [2024-07-14 18:22:52.142831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.735 [2024-07-14 18:22:52.143321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.735 [2024-07-14 18:22:52.143455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.735 [2024-07-14 18:22:52.143459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.669 18:22:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.669 18:22:52 -- common/autotest_common.sh@852 -- # return 0 00:06:45.669 18:22:52 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.669 18:22:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70055 00:06:45.669 18:22:52 -- event/cpu_locks.sh@153 -- # waitforlisten 70055 /var/tmp/spdk2.sock 00:06:45.669 18:22:52 -- common/autotest_common.sh@819 -- # '[' -z 70055 ']' 00:06:45.669 18:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.669 18:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.669 18:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:45.669 18:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.669 18:22:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.669 [2024-07-14 18:22:52.947779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.669 [2024-07-14 18:22:52.947853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70055 ] 00:06:45.927 [2024-07-14 18:22:53.093536] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.927 [2024-07-14 18:22:53.093595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.927 [2024-07-14 18:22:53.262630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.927 [2024-07-14 18:22:53.263203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.927 [2024-07-14 18:22:53.266642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.927 [2024-07-14 18:22:53.266644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.858 18:22:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.858 18:22:53 -- common/autotest_common.sh@852 -- # return 0 00:06:46.858 18:22:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.858 18:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.858 18:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.858 18:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.858 18:22:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.858 18:22:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.858 18:22:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.858 18:22:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:46.858 18:22:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.858 18:22:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:46.858 18:22:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.858 18:22:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.858 18:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.858 18:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.858 [2024-07-14 18:22:53.961613] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70025 has claimed it. 
00:06:46.858 2024/07/14 18:22:53 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:46.858 request: 00:06:46.858 { 00:06:46.858 "method": "framework_enable_cpumask_locks", 00:06:46.858 "params": {} 00:06:46.858 } 00:06:46.858 Got JSON-RPC error response 00:06:46.858 GoRPCClient: error on JSON-RPC call 00:06:46.858 18:22:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:46.858 18:22:53 -- common/autotest_common.sh@643 -- # es=1 00:06:46.858 18:22:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.858 18:22:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.858 18:22:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.858 18:22:53 -- event/cpu_locks.sh@158 -- # waitforlisten 70025 /var/tmp/spdk.sock 00:06:46.858 18:22:53 -- common/autotest_common.sh@819 -- # '[' -z 70025 ']' 00:06:46.858 18:22:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.858 18:22:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.858 18:22:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.858 18:22:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.858 18:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.858 18:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.858 18:22:54 -- common/autotest_common.sh@852 -- # return 0 00:06:46.858 18:22:54 -- event/cpu_locks.sh@159 -- # waitforlisten 70055 /var/tmp/spdk2.sock 00:06:46.858 18:22:54 -- common/autotest_common.sh@819 -- # '[' -z 70055 ']' 00:06:46.858 18:22:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.858 18:22:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.858 18:22:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
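The failure above is a plain JSON-RPC exchange over the second target's Unix socket: the test asks the second spdk_tgt (started with --disable-cpumask-locks) to claim its cores at runtime, and the server returns an internal error (-32603) because core 2 is already locked by pid 70025. For reference, the same call can be issued directly with the repo's rpc.py client, which is what the rpc_cmd helper in the trace wraps; the wrapper behaviour is assumed here, and only the socket path, method name and empty params come from the log.

    # Same RPC issued by hand (illustrative; paths taken from this workspace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # Request on the wire (as shown in the log):
    #   {"method": "framework_enable_cpumask_locks", "params": {}}
    # Response while core 2 is held by another process:
    #   Code=-32603  Msg='Failed to claim CPU core: 2'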
00:06:46.858 18:22:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.858 18:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.423 18:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.423 18:22:54 -- common/autotest_common.sh@852 -- # return 0 00:06:47.423 18:22:54 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.423 18:22:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.423 18:22:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.423 18:22:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.423 00:06:47.423 real 0m2.701s 00:06:47.423 user 0m1.398s 00:06:47.423 sys 0m0.233s 00:06:47.423 18:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.423 18:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.423 ************************************ 00:06:47.423 END TEST locking_overlapped_coremask_via_rpc 00:06:47.423 ************************************ 00:06:47.423 18:22:54 -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.423 18:22:54 -- event/cpu_locks.sh@15 -- # [[ -z 70025 ]] 00:06:47.423 18:22:54 -- event/cpu_locks.sh@15 -- # killprocess 70025 00:06:47.423 18:22:54 -- common/autotest_common.sh@926 -- # '[' -z 70025 ']' 00:06:47.423 18:22:54 -- common/autotest_common.sh@930 -- # kill -0 70025 00:06:47.423 18:22:54 -- common/autotest_common.sh@931 -- # uname 00:06:47.423 18:22:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:47.423 18:22:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70025 00:06:47.423 18:22:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:47.423 18:22:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:47.423 killing process with pid 70025 00:06:47.423 18:22:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70025' 00:06:47.423 18:22:54 -- common/autotest_common.sh@945 -- # kill 70025 00:06:47.423 18:22:54 -- common/autotest_common.sh@950 -- # wait 70025 00:06:47.682 18:22:54 -- event/cpu_locks.sh@16 -- # [[ -z 70055 ]] 00:06:47.682 18:22:54 -- event/cpu_locks.sh@16 -- # killprocess 70055 00:06:47.682 18:22:54 -- common/autotest_common.sh@926 -- # '[' -z 70055 ']' 00:06:47.682 18:22:54 -- common/autotest_common.sh@930 -- # kill -0 70055 00:06:47.682 18:22:54 -- common/autotest_common.sh@931 -- # uname 00:06:47.682 18:22:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:47.682 18:22:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70055 00:06:47.682 18:22:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:47.682 killing process with pid 70055 00:06:47.682 18:22:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:47.682 18:22:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70055' 00:06:47.682 18:22:55 -- common/autotest_common.sh@945 -- # kill 70055 00:06:47.682 18:22:55 -- common/autotest_common.sh@950 -- # wait 70055 00:06:48.249 18:22:55 -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.249 18:22:55 -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.249 18:22:55 -- event/cpu_locks.sh@15 -- # [[ -z 70025 ]] 00:06:48.249 18:22:55 -- event/cpu_locks.sh@15 -- # killprocess 70025 00:06:48.249 18:22:55 -- 
common/autotest_common.sh@926 -- # '[' -z 70025 ']' 00:06:48.249 18:22:55 -- common/autotest_common.sh@930 -- # kill -0 70025 00:06:48.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (70025) - No such process 00:06:48.249 Process with pid 70025 is not found 00:06:48.249 18:22:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 70025 is not found' 00:06:48.249 18:22:55 -- event/cpu_locks.sh@16 -- # [[ -z 70055 ]] 00:06:48.249 18:22:55 -- event/cpu_locks.sh@16 -- # killprocess 70055 00:06:48.249 18:22:55 -- common/autotest_common.sh@926 -- # '[' -z 70055 ']' 00:06:48.249 18:22:55 -- common/autotest_common.sh@930 -- # kill -0 70055 00:06:48.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (70055) - No such process 00:06:48.249 Process with pid 70055 is not found 00:06:48.249 18:22:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 70055 is not found' 00:06:48.249 18:22:55 -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.249 00:06:48.249 real 0m20.710s 00:06:48.249 user 0m36.390s 00:06:48.249 sys 0m5.692s 00:06:48.249 18:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.249 18:22:55 -- common/autotest_common.sh@10 -- # set +x 00:06:48.249 ************************************ 00:06:48.249 END TEST cpu_locks 00:06:48.249 ************************************ 00:06:48.249 00:06:48.249 real 0m48.337s 00:06:48.249 user 1m33.496s 00:06:48.249 sys 0m9.523s 00:06:48.249 18:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.249 ************************************ 00:06:48.249 18:22:55 -- common/autotest_common.sh@10 -- # set +x 00:06:48.249 END TEST event 00:06:48.249 ************************************ 00:06:48.249 18:22:55 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:48.249 18:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.249 18:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.249 18:22:55 -- common/autotest_common.sh@10 -- # set +x 00:06:48.249 ************************************ 00:06:48.249 START TEST thread 00:06:48.249 ************************************ 00:06:48.249 18:22:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:48.249 * Looking for test storage... 00:06:48.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:48.249 18:22:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.249 18:22:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:48.249 18:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.249 18:22:55 -- common/autotest_common.sh@10 -- # set +x 00:06:48.249 ************************************ 00:06:48.249 START TEST thread_poller_perf 00:06:48.249 ************************************ 00:06:48.249 18:22:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.249 [2024-07-14 18:22:55.631270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:48.249 [2024-07-14 18:22:55.631419] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70207 ] 00:06:48.508 [2024-07-14 18:22:55.779366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.508 [2024-07-14 18:22:55.878341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.508 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:49.905 ====================================== 00:06:49.905 busy:2208263514 (cyc) 00:06:49.905 total_run_count: 310000 00:06:49.905 tsc_hz: 2200000000 (cyc) 00:06:49.905 ====================================== 00:06:49.905 poller_cost: 7123 (cyc), 3237 (nsec) 00:06:49.905 00:06:49.905 real 0m1.350s 00:06:49.905 user 0m1.176s 00:06:49.905 sys 0m0.065s 00:06:49.905 18:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.905 ************************************ 00:06:49.905 END TEST thread_poller_perf 00:06:49.905 ************************************ 00:06:49.905 18:22:56 -- common/autotest_common.sh@10 -- # set +x 00:06:49.905 18:22:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.905 18:22:56 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:49.905 18:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.905 18:22:56 -- common/autotest_common.sh@10 -- # set +x 00:06:49.905 ************************************ 00:06:49.905 START TEST thread_poller_perf 00:06:49.905 ************************************ 00:06:49.905 18:22:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.905 [2024-07-14 18:22:57.025353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:49.905 [2024-07-14 18:22:57.025458] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70237 ] 00:06:49.905 [2024-07-14 18:22:57.162652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.905 [2024-07-14 18:22:57.243661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.905 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:51.281 ====================================== 00:06:51.281 busy:2202706603 (cyc) 00:06:51.281 total_run_count: 4090000 00:06:51.281 tsc_hz: 2200000000 (cyc) 00:06:51.281 ====================================== 00:06:51.281 poller_cost: 538 (cyc), 244 (nsec) 00:06:51.281 00:06:51.281 real 0m1.311s 00:06:51.281 user 0m1.147s 00:06:51.282 sys 0m0.056s 00:06:51.282 18:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.282 18:22:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 ************************************ 00:06:51.282 END TEST thread_poller_perf 00:06:51.282 ************************************ 00:06:51.282 18:22:58 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:51.282 00:06:51.282 real 0m2.846s 00:06:51.282 user 0m2.389s 00:06:51.282 sys 0m0.236s 00:06:51.282 18:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.282 ************************************ 00:06:51.282 END TEST thread 00:06:51.282 ************************************ 00:06:51.282 18:22:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 18:22:58 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:51.282 18:22:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.282 18:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.282 18:22:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 ************************************ 00:06:51.282 START TEST accel 00:06:51.282 ************************************ 00:06:51.282 18:22:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:51.282 * Looking for test storage... 00:06:51.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:51.282 18:22:58 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:51.282 18:22:58 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:51.282 18:22:58 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.282 18:22:58 -- accel/accel.sh@59 -- # spdk_tgt_pid=70311 00:06:51.282 18:22:58 -- accel/accel.sh@60 -- # waitforlisten 70311 00:06:51.282 18:22:58 -- common/autotest_common.sh@819 -- # '[' -z 70311 ']' 00:06:51.282 18:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.282 18:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.282 18:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.282 18:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.282 18:22:58 -- accel/accel.sh@58 -- # build_accel_config 00:06:51.282 18:22:58 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:51.282 18:22:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 18:22:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.282 18:22:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.282 18:22:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.282 18:22:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.282 18:22:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.282 18:22:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.282 18:22:58 -- accel/accel.sh@42 -- # jq -r . 00:06:51.282 [2024-07-14 18:22:58.556092] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
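The two poller_perf summaries above are internally consistent: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds with the reported 2.2 GHz TSC. The quick cross-check below redoes that arithmetic in shell; the formula is inferred from the printed numbers, not quoted from the tool's source.

    # Cross-check of the reported poller_cost values (integer arithmetic).
    #   1 us period: 2208263514 cyc / 310000 runs  -> 7123 cyc -> 3237 ns
    #   0 us period: 2202706603 cyc / 4090000 runs ->  538 cyc ->  244 ns
    tsc_hz=2200000000
    for run in '2208263514 310000' '2202706603 4090000'; do
        set -- $run
        cyc=$(( $1 / $2 ))                        # cycles spent per poller invocation
        nsec=$(( cyc * 1000000000 / tsc_hz ))     # cycles -> nanoseconds at 2.2 GHz
        echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
    done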
00:06:51.282 [2024-07-14 18:22:58.556230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70311 ] 00:06:51.282 [2024-07-14 18:22:58.693310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.540 [2024-07-14 18:22:58.786015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.540 [2024-07-14 18:22:58.786212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.107 18:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.107 18:22:59 -- common/autotest_common.sh@852 -- # return 0 00:06:52.107 18:22:59 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:52.107 18:22:59 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:52.107 18:22:59 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:52.108 18:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.108 18:22:59 -- common/autotest_common.sh@10 -- # set +x 00:06:52.366 18:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.366 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.366 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.366 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.366 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.366 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.366 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.366 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.366 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.366 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.366 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.366 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.366 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # IFS== 00:06:52.367 18:22:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.367 18:22:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.367 18:22:59 -- accel/accel.sh@67 -- # killprocess 70311 00:06:52.367 18:22:59 -- common/autotest_common.sh@926 -- # '[' -z 70311 ']' 00:06:52.367 18:22:59 -- common/autotest_common.sh@930 -- # kill -0 70311 00:06:52.367 18:22:59 -- common/autotest_common.sh@931 -- # uname 00:06:52.367 18:22:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.367 18:22:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70311 00:06:52.367 18:22:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.367 killing process with pid 70311 00:06:52.367 18:22:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.367 18:22:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70311' 00:06:52.367 18:22:59 -- common/autotest_common.sh@945 -- # kill 70311 00:06:52.367 18:22:59 -- common/autotest_common.sh@950 -- # wait 70311 00:06:52.625 18:22:59 -- accel/accel.sh@68 -- # trap - ERR 00:06:52.625 18:22:59 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:52.625 18:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:52.625 18:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.625 18:22:59 -- common/autotest_common.sh@10 -- # set +x 00:06:52.625 18:22:59 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:52.625 18:22:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:52.625 18:22:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.625 18:22:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.625 18:22:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.625 18:22:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.625 18:22:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.625 18:22:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 
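The long run of IFS== / read lines above is the trace of a loop that converts the accel_get_opc_assignments RPC output into opc=module pairs and records the module expected for each opcode; with no accelerator module loaded in this run, every opcode maps to the software implementation. A compact reconstruction, reusing the jq filter shown in the trace:

    # Reconstruction of the opcode bookkeeping traced above (not a verbatim copy
    # of accel.sh). rpc_cmd is the test suite's RPC helper; every opcode is
    # expected to be handled by the software module in this configuration.
    declare -A expected_opcs
    exp_opcs=($(rpc_cmd accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"    # split e.g. "crc32c=software"
        expected_opcs["$opc"]=software
    done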
00:06:52.625 18:22:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.625 18:22:59 -- accel/accel.sh@42 -- # jq -r . 00:06:52.625 18:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.625 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:06:52.884 18:23:00 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:52.884 18:23:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:52.884 18:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.884 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:06:52.884 ************************************ 00:06:52.884 START TEST accel_missing_filename 00:06:52.884 ************************************ 00:06:52.884 18:23:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:52.884 18:23:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:52.884 18:23:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:52.884 18:23:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:52.884 18:23:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.884 18:23:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:52.884 18:23:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.884 18:23:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:52.884 18:23:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:52.884 18:23:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.884 18:23:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.884 18:23:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.884 18:23:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.884 18:23:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.884 18:23:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.884 18:23:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.884 18:23:00 -- accel/accel.sh@42 -- # jq -r . 00:06:52.884 [2024-07-14 18:23:00.094595] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:52.884 [2024-07-14 18:23:00.094699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70380 ] 00:06:52.884 [2024-07-14 18:23:00.236006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.142 [2024-07-14 18:23:00.339061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.142 [2024-07-14 18:23:00.396097] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.142 [2024-07-14 18:23:00.471885] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:53.142 A filename is required. 
00:06:53.142 18:23:00 -- common/autotest_common.sh@643 -- # es=234 00:06:53.142 18:23:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.142 18:23:00 -- common/autotest_common.sh@652 -- # es=106 00:06:53.142 18:23:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:53.142 18:23:00 -- common/autotest_common.sh@660 -- # es=1 00:06:53.142 18:23:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.142 00:06:53.142 real 0m0.479s 00:06:53.142 user 0m0.311s 00:06:53.142 sys 0m0.112s 00:06:53.142 18:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.142 ************************************ 00:06:53.142 END TEST accel_missing_filename 00:06:53.142 ************************************ 00:06:53.142 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.400 18:23:00 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.400 18:23:00 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:53.400 18:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.400 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.400 ************************************ 00:06:53.400 START TEST accel_compress_verify 00:06:53.400 ************************************ 00:06:53.400 18:23:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.400 18:23:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.400 18:23:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.400 18:23:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:53.400 18:23:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.400 18:23:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:53.400 18:23:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.400 18:23:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.400 18:23:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.400 18:23:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.400 18:23:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.400 18:23:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.400 18:23:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.400 18:23:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.400 18:23:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.400 18:23:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.400 18:23:00 -- accel/accel.sh@42 -- # jq -r . 00:06:53.400 [2024-07-14 18:23:00.617143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:53.400 [2024-07-14 18:23:00.617245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70407 ] 00:06:53.400 [2024-07-14 18:23:00.757940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.659 [2024-07-14 18:23:00.853766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.659 [2024-07-14 18:23:00.908297] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.659 [2024-07-14 18:23:00.982719] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:53.659 00:06:53.659 Compression does not support the verify option, aborting. 00:06:53.659 18:23:01 -- common/autotest_common.sh@643 -- # es=161 00:06:53.659 18:23:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.659 18:23:01 -- common/autotest_common.sh@652 -- # es=33 00:06:53.659 18:23:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:53.659 18:23:01 -- common/autotest_common.sh@660 -- # es=1 00:06:53.659 18:23:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.659 00:06:53.659 real 0m0.466s 00:06:53.659 user 0m0.312s 00:06:53.659 sys 0m0.105s 00:06:53.659 18:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.659 ************************************ 00:06:53.659 END TEST accel_compress_verify 00:06:53.659 ************************************ 00:06:53.659 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.917 18:23:01 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:53.917 18:23:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:53.917 18:23:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.917 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.917 ************************************ 00:06:53.917 START TEST accel_wrong_workload 00:06:53.917 ************************************ 00:06:53.917 18:23:01 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:53.917 18:23:01 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.917 18:23:01 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:53.917 18:23:01 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:53.917 18:23:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.917 18:23:01 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:53.917 18:23:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.917 18:23:01 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:53.917 18:23:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:53.917 18:23:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.917 18:23:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.917 18:23:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.917 18:23:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.917 18:23:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.917 18:23:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.917 18:23:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.917 18:23:01 -- accel/accel.sh@42 -- # jq -r . 
00:06:53.917 Unsupported workload type: foobar 00:06:53.917 [2024-07-14 18:23:01.131656] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:53.917 accel_perf options: 00:06:53.917 [-h help message] 00:06:53.917 [-q queue depth per core] 00:06:53.917 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.917 [-T number of threads per core 00:06:53.917 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.917 [-t time in seconds] 00:06:53.917 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.917 [ dif_verify, , dif_generate, dif_generate_copy 00:06:53.917 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.917 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.917 [-S for crc32c workload, use this seed value (default 0) 00:06:53.917 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.917 [-f for fill workload, use this BYTE value (default 255) 00:06:53.917 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.917 [-y verify result if this switch is on] 00:06:53.917 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.917 Can be used to spread operations across a wider range of memory. 00:06:53.917 18:23:01 -- common/autotest_common.sh@643 -- # es=1 00:06:53.917 18:23:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.917 18:23:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:53.918 18:23:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.918 00:06:53.918 real 0m0.028s 00:06:53.918 user 0m0.015s 00:06:53.918 sys 0m0.013s 00:06:53.918 18:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.918 ************************************ 00:06:53.918 END TEST accel_wrong_workload 00:06:53.918 ************************************ 00:06:53.918 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.918 18:23:01 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.918 18:23:01 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:53.918 18:23:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.918 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.918 ************************************ 00:06:53.918 START TEST accel_negative_buffers 00:06:53.918 ************************************ 00:06:53.918 18:23:01 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.918 18:23:01 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.918 18:23:01 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:53.918 18:23:01 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:53.918 18:23:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.918 18:23:01 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:53.918 18:23:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.918 18:23:01 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:53.918 18:23:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:53.918 18:23:01 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:53.918 18:23:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.918 18:23:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.918 18:23:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.918 18:23:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.918 18:23:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.918 18:23:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.918 18:23:01 -- accel/accel.sh@42 -- # jq -r . 00:06:53.918 -x option must be non-negative. 00:06:53.918 [2024-07-14 18:23:01.201673] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:53.918 accel_perf options: 00:06:53.918 [-h help message] 00:06:53.918 [-q queue depth per core] 00:06:53.918 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.918 [-T number of threads per core 00:06:53.918 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.918 [-t time in seconds] 00:06:53.918 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.918 [ dif_verify, , dif_generate, dif_generate_copy 00:06:53.918 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.918 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.918 [-S for crc32c workload, use this seed value (default 0) 00:06:53.918 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.918 [-f for fill workload, use this BYTE value (default 255) 00:06:53.918 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.918 [-y verify result if this switch is on] 00:06:53.918 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.918 Can be used to spread operations across a wider range of memory. 
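The option listing printed twice above is accel_perf's own usage text; the two negative tests simply hand it a workload name and an -x value that it rejects. For reference, here are the same failure modes and one valid run spelled out as direct invocations, using only flags from that listing and the binary path that appears throughout this log (illustrative only, output not reproduced):

    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

    $ACCEL_PERF -t 1 -w foobar           # rejected: 'foobar' is not a listed workload type
    $ACCEL_PERF -t 1 -w xor -y -x -1     # rejected: -x (xor source buffers) must be non-negative
    $ACCEL_PERF -t 1 -w crc32c -S 32 -y  # accepted: 1-second software crc32c run with seed 32,
                                         # matching the accel_crc32c test that starts below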
00:06:53.918 18:23:01 -- common/autotest_common.sh@643 -- # es=1 00:06:53.918 18:23:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.918 18:23:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:53.918 18:23:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.918 00:06:53.918 real 0m0.026s 00:06:53.918 user 0m0.011s 00:06:53.918 sys 0m0.015s 00:06:53.918 18:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.918 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.918 ************************************ 00:06:53.918 END TEST accel_negative_buffers 00:06:53.918 ************************************ 00:06:53.918 18:23:01 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:53.918 18:23:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:53.918 18:23:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.918 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.918 ************************************ 00:06:53.918 START TEST accel_crc32c 00:06:53.918 ************************************ 00:06:53.918 18:23:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:53.918 18:23:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.918 18:23:01 -- accel/accel.sh@17 -- # local accel_module 00:06:53.918 18:23:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.918 18:23:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:53.918 18:23:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.918 18:23:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.918 18:23:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.918 18:23:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.918 18:23:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.918 18:23:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.918 18:23:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.918 18:23:01 -- accel/accel.sh@42 -- # jq -r . 00:06:53.918 [2024-07-14 18:23:01.276672] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:53.918 [2024-07-14 18:23:01.276785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:06:54.177 [2024-07-14 18:23:01.413964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.177 [2024-07-14 18:23:01.509010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.552 18:23:02 -- accel/accel.sh@18 -- # out=' 00:06:55.552 SPDK Configuration: 00:06:55.552 Core mask: 0x1 00:06:55.552 00:06:55.552 Accel Perf Configuration: 00:06:55.552 Workload Type: crc32c 00:06:55.552 CRC-32C seed: 32 00:06:55.552 Transfer size: 4096 bytes 00:06:55.552 Vector count 1 00:06:55.552 Module: software 00:06:55.552 Queue depth: 32 00:06:55.552 Allocate depth: 32 00:06:55.552 # threads/core: 1 00:06:55.552 Run time: 1 seconds 00:06:55.552 Verify: Yes 00:06:55.552 00:06:55.552 Running for 1 seconds... 
00:06:55.552 00:06:55.552 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.552 ------------------------------------------------------------------------------------ 00:06:55.552 0,0 446912/s 1745 MiB/s 0 0 00:06:55.552 ==================================================================================== 00:06:55.552 Total 446912/s 1745 MiB/s 0 0' 00:06:55.552 18:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:55.552 18:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:55.552 18:23:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:55.552 18:23:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.552 18:23:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:55.552 18:23:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.552 18:23:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.552 18:23:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.552 18:23:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.552 18:23:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.552 18:23:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.552 18:23:02 -- accel/accel.sh@42 -- # jq -r . 00:06:55.552 [2024-07-14 18:23:02.741247] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:55.552 [2024-07-14 18:23:02.741342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70489 ] 00:06:55.552 [2024-07-14 18:23:02.880675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.811 [2024-07-14 18:23:02.975693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=0x1 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=crc32c 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=32 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=software 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=32 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=32 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=1 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val=Yes 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.811 18:23:03 -- accel/accel.sh@21 -- # val= 00:06:55.811 18:23:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.811 18:23:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@21 -- # val= 00:06:56.767 18:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@21 -- # val= 00:06:56.767 18:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@21 -- # val= 00:06:56.767 18:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@21 -- # val= 00:06:56.767 18:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@21 -- # val= 00:06:56.767 18:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:56.767 18:23:04 -- 
accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@21 -- # val= 00:06:56.767 18:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:56.767 18:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:56.767 18:23:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.767 18:23:04 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:56.767 18:23:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.767 00:06:56.767 real 0m2.932s 00:06:56.767 user 0m2.504s 00:06:56.767 sys 0m0.223s 00:06:56.767 18:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.767 18:23:04 -- common/autotest_common.sh@10 -- # set +x 00:06:56.767 ************************************ 00:06:56.767 END TEST accel_crc32c 00:06:56.767 ************************************ 00:06:57.025 18:23:04 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:57.025 18:23:04 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:57.026 18:23:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.026 18:23:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.026 ************************************ 00:06:57.026 START TEST accel_crc32c_C2 00:06:57.026 ************************************ 00:06:57.026 18:23:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:57.026 18:23:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.026 18:23:04 -- accel/accel.sh@17 -- # local accel_module 00:06:57.026 18:23:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.026 18:23:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.026 18:23:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.026 18:23:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.026 18:23:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.026 18:23:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.026 18:23:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.026 18:23:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.026 18:23:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.026 18:23:04 -- accel/accel.sh@42 -- # jq -r . 00:06:57.026 [2024-07-14 18:23:04.253733] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.026 [2024-07-14 18:23:04.253856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70519 ] 00:06:57.026 [2024-07-14 18:23:04.393808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.284 [2024-07-14 18:23:04.486391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.658 18:23:05 -- accel/accel.sh@18 -- # out=' 00:06:58.658 SPDK Configuration: 00:06:58.658 Core mask: 0x1 00:06:58.658 00:06:58.658 Accel Perf Configuration: 00:06:58.658 Workload Type: crc32c 00:06:58.658 CRC-32C seed: 0 00:06:58.658 Transfer size: 4096 bytes 00:06:58.658 Vector count 2 00:06:58.658 Module: software 00:06:58.658 Queue depth: 32 00:06:58.658 Allocate depth: 32 00:06:58.658 # threads/core: 1 00:06:58.658 Run time: 1 seconds 00:06:58.658 Verify: Yes 00:06:58.658 00:06:58.658 Running for 1 seconds... 
00:06:58.658 00:06:58.658 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.658 ------------------------------------------------------------------------------------ 00:06:58.658 0,0 344896/s 2694 MiB/s 0 0 00:06:58.658 ==================================================================================== 00:06:58.658 Total 344896/s 1347 MiB/s 0 0' 00:06:58.658 18:23:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.658 18:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.658 18:23:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.658 18:23:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.658 18:23:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.658 18:23:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.658 18:23:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.658 18:23:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.658 18:23:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.658 18:23:05 -- accel/accel.sh@42 -- # jq -r . 00:06:58.658 [2024-07-14 18:23:05.713985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:58.658 [2024-07-14 18:23:05.714107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70539 ] 00:06:58.658 [2024-07-14 18:23:05.856871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.658 [2024-07-14 18:23:05.944871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.658 18:23:05 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:05 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:05 -- accel/accel.sh@21 -- # val=0x1 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=crc32c 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=0 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=software 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=32 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=32 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=1 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val=Yes 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.658 18:23:06 -- accel/accel.sh@21 -- # val= 00:06:58.658 18:23:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.658 18:23:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@21 -- # val= 00:07:00.031 18:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@21 -- # val= 00:07:00.031 18:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@21 -- # val= 00:07:00.031 18:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@21 -- # val= 00:07:00.031 18:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@21 -- # val= 00:07:00.031 18:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.031 18:23:07 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@21 -- # val= 00:07:00.031 18:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.031 18:23:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.031 18:23:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.031 18:23:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:00.031 18:23:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.031 00:07:00.031 real 0m2.923s 00:07:00.031 user 0m2.502s 00:07:00.031 sys 0m0.220s 00:07:00.031 18:23:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.031 18:23:07 -- common/autotest_common.sh@10 -- # set +x 00:07:00.031 ************************************ 00:07:00.031 END TEST accel_crc32c_C2 00:07:00.031 ************************************ 00:07:00.031 18:23:07 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:00.031 18:23:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:00.031 18:23:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.031 18:23:07 -- common/autotest_common.sh@10 -- # set +x 00:07:00.031 ************************************ 00:07:00.031 START TEST accel_copy 00:07:00.031 ************************************ 00:07:00.031 18:23:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:00.031 18:23:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.031 18:23:07 -- accel/accel.sh@17 -- # local accel_module 00:07:00.031 18:23:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:00.031 18:23:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.031 18:23:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.031 18:23:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.031 18:23:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.031 18:23:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.031 18:23:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.031 18:23:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.031 18:23:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.031 18:23:07 -- accel/accel.sh@42 -- # jq -r . 00:07:00.031 [2024-07-14 18:23:07.221477] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:00.031 [2024-07-14 18:23:07.221569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70573 ] 00:07:00.031 [2024-07-14 18:23:07.355163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.031 [2024-07-14 18:23:07.444795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.403 18:23:08 -- accel/accel.sh@18 -- # out=' 00:07:01.403 SPDK Configuration: 00:07:01.403 Core mask: 0x1 00:07:01.403 00:07:01.403 Accel Perf Configuration: 00:07:01.403 Workload Type: copy 00:07:01.403 Transfer size: 4096 bytes 00:07:01.403 Vector count 1 00:07:01.403 Module: software 00:07:01.403 Queue depth: 32 00:07:01.403 Allocate depth: 32 00:07:01.403 # threads/core: 1 00:07:01.403 Run time: 1 seconds 00:07:01.403 Verify: Yes 00:07:01.403 00:07:01.403 Running for 1 seconds... 
00:07:01.403 00:07:01.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.403 ------------------------------------------------------------------------------------ 00:07:01.403 0,0 313088/s 1223 MiB/s 0 0 00:07:01.403 ==================================================================================== 00:07:01.403 Total 313088/s 1223 MiB/s 0 0' 00:07:01.403 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.403 18:23:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:01.403 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.403 18:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.403 18:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.403 18:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.403 18:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.403 18:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.403 18:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.403 18:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.403 18:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.403 18:23:08 -- accel/accel.sh@42 -- # jq -r . 00:07:01.403 [2024-07-14 18:23:08.673811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:01.403 [2024-07-14 18:23:08.673923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70593 ] 00:07:01.403 [2024-07-14 18:23:08.817462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.660 [2024-07-14 18:23:08.909339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.660 18:23:08 -- accel/accel.sh@21 -- # val= 00:07:01.660 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val= 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=0x1 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val= 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val= 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=copy 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- 
accel/accel.sh@21 -- # val= 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=software 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=32 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=32 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=1 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val=Yes 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val= 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 18:23:08 -- accel/accel.sh@21 -- # val= 00:07:01.661 18:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 18:23:08 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@21 -- # val= 00:07:03.035 18:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@21 -- # val= 00:07:03.035 18:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@21 -- # val= 00:07:03.035 18:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@21 -- # val= 00:07:03.035 18:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@21 -- # val= 00:07:03.035 ************************************ 00:07:03.035 END TEST accel_copy 00:07:03.035 ************************************ 00:07:03.035 18:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@21 -- # val= 00:07:03.035 
18:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.035 18:23:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.035 18:23:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.035 18:23:10 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:03.035 18:23:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.035 00:07:03.035 real 0m2.917s 00:07:03.035 user 0m2.494s 00:07:03.035 sys 0m0.219s 00:07:03.035 18:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.035 18:23:10 -- common/autotest_common.sh@10 -- # set +x 00:07:03.035 18:23:10 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.035 18:23:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:03.035 18:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.035 18:23:10 -- common/autotest_common.sh@10 -- # set +x 00:07:03.035 ************************************ 00:07:03.035 START TEST accel_fill 00:07:03.035 ************************************ 00:07:03.035 18:23:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.035 18:23:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.035 18:23:10 -- accel/accel.sh@17 -- # local accel_module 00:07:03.035 18:23:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.035 18:23:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.035 18:23:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.035 18:23:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.035 18:23:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.035 18:23:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.035 18:23:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.035 18:23:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.035 18:23:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.035 18:23:10 -- accel/accel.sh@42 -- # jq -r . 00:07:03.035 [2024-07-14 18:23:10.191443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.035 [2024-07-14 18:23:10.191557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70627 ] 00:07:03.035 [2024-07-14 18:23:10.330548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.035 [2024-07-14 18:23:10.422877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.408 18:23:11 -- accel/accel.sh@18 -- # out=' 00:07:04.408 SPDK Configuration: 00:07:04.408 Core mask: 0x1 00:07:04.408 00:07:04.408 Accel Perf Configuration: 00:07:04.408 Workload Type: fill 00:07:04.408 Fill pattern: 0x80 00:07:04.408 Transfer size: 4096 bytes 00:07:04.408 Vector count 1 00:07:04.408 Module: software 00:07:04.408 Queue depth: 64 00:07:04.408 Allocate depth: 64 00:07:04.408 # threads/core: 1 00:07:04.408 Run time: 1 seconds 00:07:04.408 Verify: Yes 00:07:04.408 00:07:04.408 Running for 1 seconds... 
00:07:04.408 00:07:04.408 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.408 ------------------------------------------------------------------------------------ 00:07:04.409 0,0 456640/s 1783 MiB/s 0 0 00:07:04.409 ==================================================================================== 00:07:04.409 Total 456640/s 1783 MiB/s 0 0' 00:07:04.409 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 18:23:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.409 18:23:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.409 18:23:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.409 18:23:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.409 18:23:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.409 18:23:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.409 18:23:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.409 18:23:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.409 18:23:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.409 18:23:11 -- accel/accel.sh@42 -- # jq -r . 00:07:04.409 [2024-07-14 18:23:11.652983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:04.409 [2024-07-14 18:23:11.653078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70647 ] 00:07:04.409 [2024-07-14 18:23:11.788016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.666 [2024-07-14 18:23:11.878107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=0x1 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=fill 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=0x80 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 
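In the single-vector result tables in this run, the Bandwidth column is simply the Transfers column multiplied by the 4096-byte transfer size; for the fill run reported above, for instance:

    # 456640 transfers/s x 4096 bytes ~= 1783 MiB/s, matching the "0,0 456640/s 1783 MiB/s" row
    echo $((456640 * 4096 / 1024 / 1024))    # -> 1783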
00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=software 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=64 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=64 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=1 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val=Yes 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.666 18:23:11 -- accel/accel.sh@21 -- # val= 00:07:04.666 18:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # IFS=: 00:07:04.666 18:23:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@21 -- # val= 00:07:06.040 18:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@21 -- # val= 00:07:06.040 18:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@21 -- # val= 00:07:06.040 18:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@21 -- # val= 00:07:06.040 18:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@21 -- # val= 00:07:06.040 ************************************ 00:07:06.040 END TEST accel_fill 00:07:06.040 
************************************ 00:07:06.040 18:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@21 -- # val= 00:07:06.040 18:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.040 18:23:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.040 18:23:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.040 18:23:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:06.040 18:23:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.040 00:07:06.040 real 0m2.920s 00:07:06.040 user 0m2.481s 00:07:06.040 sys 0m0.233s 00:07:06.040 18:23:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.040 18:23:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.040 18:23:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:06.040 18:23:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.040 18:23:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.040 18:23:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.040 ************************************ 00:07:06.040 START TEST accel_copy_crc32c 00:07:06.040 ************************************ 00:07:06.040 18:23:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:06.040 18:23:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.040 18:23:13 -- accel/accel.sh@17 -- # local accel_module 00:07:06.040 18:23:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.040 18:23:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.040 18:23:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.040 18:23:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.040 18:23:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.040 18:23:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.040 18:23:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.040 18:23:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.040 18:23:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.040 18:23:13 -- accel/accel.sh@42 -- # jq -r . 00:07:06.040 [2024-07-14 18:23:13.155521] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.040 [2024-07-14 18:23:13.155606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70676 ] 00:07:06.040 [2024-07-14 18:23:13.290871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.040 [2024-07-14 18:23:13.380617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.413 18:23:14 -- accel/accel.sh@18 -- # out=' 00:07:07.413 SPDK Configuration: 00:07:07.413 Core mask: 0x1 00:07:07.413 00:07:07.413 Accel Perf Configuration: 00:07:07.413 Workload Type: copy_crc32c 00:07:07.413 CRC-32C seed: 0 00:07:07.413 Vector size: 4096 bytes 00:07:07.413 Transfer size: 4096 bytes 00:07:07.413 Vector count 1 00:07:07.413 Module: software 00:07:07.413 Queue depth: 32 00:07:07.413 Allocate depth: 32 00:07:07.413 # threads/core: 1 00:07:07.413 Run time: 1 seconds 00:07:07.413 Verify: Yes 00:07:07.413 00:07:07.413 Running for 1 seconds... 
00:07:07.413 00:07:07.413 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.413 ------------------------------------------------------------------------------------ 00:07:07.413 0,0 247072/s 965 MiB/s 0 0 00:07:07.413 ==================================================================================== 00:07:07.413 Total 247072/s 965 MiB/s 0 0' 00:07:07.413 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.413 18:23:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:07.413 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.413 18:23:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:07.413 18:23:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.413 18:23:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.413 18:23:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.414 18:23:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.414 18:23:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.414 18:23:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.414 18:23:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.414 18:23:14 -- accel/accel.sh@42 -- # jq -r . 00:07:07.414 [2024-07-14 18:23:14.609579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:07.414 [2024-07-14 18:23:14.609675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70701 ] 00:07:07.414 [2024-07-14 18:23:14.745390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.414 [2024-07-14 18:23:14.835339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=0x1 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=0 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 
18:23:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=software 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=32 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=32 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=1 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val=Yes 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:07.673 18:23:14 -- accel/accel.sh@21 -- # val= 00:07:07.673 18:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # IFS=: 00:07:07.673 18:23:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@21 -- # val= 00:07:09.048 18:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@21 -- # val= 00:07:09.048 18:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@21 -- # val= 00:07:09.048 18:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@21 -- # val= 00:07:09.048 18:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # IFS=: 
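Every accel_perf invocation in this log is given -c /dev/fd/62, and each is preceded by a build_accel_config trace that ends in jq -r ., as in the run being set up here: the accel JSON configuration is handed to the binary over an extra file descriptor instead of a temporary file. Below is a minimal sketch of that technique, assuming a hand-written JSON snippet; the real build_accel_config in accel.sh assembles the document from the accel_json_cfg array, which stays empty in these runs because no accel modules are configured.

    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    accel_json='{"subsystems": []}'   # hypothetical, effectively empty config

    # feed the JSON to accel_perf on fd 62 via process substitution, so the tool
    # can read it back as /dev/fd/62 much like the command lines traced above
    $ACCEL_PERF -c /dev/fd/62 -t 1 -w copy_crc32c -y 62< <(echo "$accel_json" | jq -r .)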
00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@21 -- # val= 00:07:09.048 ************************************ 00:07:09.048 END TEST accel_copy_crc32c 00:07:09.048 ************************************ 00:07:09.048 18:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@21 -- # val= 00:07:09.048 18:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.048 18:23:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.048 18:23:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.048 18:23:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:09.048 18:23:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.048 00:07:09.048 real 0m2.916s 00:07:09.048 user 0m2.487s 00:07:09.048 sys 0m0.224s 00:07:09.049 18:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.049 18:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 18:23:16 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:09.049 18:23:16 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:09.049 18:23:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.049 18:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 START TEST accel_copy_crc32c_C2 00:07:09.049 ************************************ 00:07:09.049 18:23:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:09.049 18:23:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.049 18:23:16 -- accel/accel.sh@17 -- # local accel_module 00:07:09.049 18:23:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:09.049 18:23:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:09.049 18:23:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.049 18:23:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.049 18:23:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.049 18:23:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.049 18:23:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.049 18:23:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.049 18:23:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.049 18:23:16 -- accel/accel.sh@42 -- # jq -r . 00:07:09.049 [2024-07-14 18:23:16.114379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:09.049 [2024-07-14 18:23:16.114477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70730 ] 00:07:09.049 [2024-07-14 18:23:16.253731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.049 [2024-07-14 18:23:16.344052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.424 18:23:17 -- accel/accel.sh@18 -- # out=' 00:07:10.424 SPDK Configuration: 00:07:10.424 Core mask: 0x1 00:07:10.424 00:07:10.424 Accel Perf Configuration: 00:07:10.424 Workload Type: copy_crc32c 00:07:10.424 CRC-32C seed: 0 00:07:10.424 Vector size: 4096 bytes 00:07:10.424 Transfer size: 8192 bytes 00:07:10.424 Vector count 2 00:07:10.424 Module: software 00:07:10.424 Queue depth: 32 00:07:10.424 Allocate depth: 32 00:07:10.424 # threads/core: 1 00:07:10.424 Run time: 1 seconds 00:07:10.424 Verify: Yes 00:07:10.424 00:07:10.424 Running for 1 seconds... 00:07:10.424 00:07:10.424 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.424 ------------------------------------------------------------------------------------ 00:07:10.424 0,0 178048/s 1391 MiB/s 0 0 00:07:10.424 ==================================================================================== 00:07:10.424 Total 178048/s 695 MiB/s 0 0' 00:07:10.424 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.424 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.424 18:23:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.424 18:23:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.424 18:23:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.424 18:23:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.424 18:23:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.424 18:23:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.424 18:23:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.424 18:23:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.424 18:23:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.424 18:23:17 -- accel/accel.sh@42 -- # jq -r . 00:07:10.424 [2024-07-14 18:23:17.566744] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:10.424 [2024-07-14 18:23:17.566822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70749 ] 00:07:10.424 [2024-07-14 18:23:17.698786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.424 [2024-07-14 18:23:17.787768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.424 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=0x1 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=0 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=software 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=32 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=32 
00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=1 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val=Yes 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.683 18:23:17 -- accel/accel.sh@21 -- # val= 00:07:10.683 18:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.683 18:23:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.618 18:23:18 -- accel/accel.sh@21 -- # val= 00:07:11.618 18:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.618 18:23:18 -- accel/accel.sh@21 -- # val= 00:07:11.618 18:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.618 18:23:18 -- accel/accel.sh@21 -- # val= 00:07:11.618 18:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.618 18:23:18 -- accel/accel.sh@21 -- # val= 00:07:11.618 18:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.618 18:23:18 -- accel/accel.sh@21 -- # val= 00:07:11.618 18:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.618 18:23:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.619 18:23:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.619 18:23:18 -- accel/accel.sh@21 -- # val= 00:07:11.619 18:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.619 18:23:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.619 18:23:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.619 18:23:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.619 18:23:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:11.619 18:23:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.619 00:07:11.619 real 0m2.900s 00:07:11.619 user 0m2.480s 00:07:11.619 sys 0m0.215s 00:07:11.619 18:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.619 18:23:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.619 ************************************ 00:07:11.619 END TEST accel_copy_crc32c_C2 00:07:11.619 ************************************ 00:07:11.619 18:23:19 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:11.619 18:23:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:07:11.619 18:23:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.619 18:23:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.619 ************************************ 00:07:11.619 START TEST accel_dualcast 00:07:11.619 ************************************ 00:07:11.619 18:23:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:11.619 18:23:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.619 18:23:19 -- accel/accel.sh@17 -- # local accel_module 00:07:11.877 18:23:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:11.877 18:23:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:11.877 18:23:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.877 18:23:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.878 18:23:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.878 18:23:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.878 18:23:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.878 18:23:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.878 18:23:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.878 18:23:19 -- accel/accel.sh@42 -- # jq -r . 00:07:11.878 [2024-07-14 18:23:19.063519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:11.878 [2024-07-14 18:23:19.063613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:07:11.878 [2024-07-14 18:23:19.197228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.878 [2024-07-14 18:23:19.287099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.269 18:23:20 -- accel/accel.sh@18 -- # out=' 00:07:13.269 SPDK Configuration: 00:07:13.269 Core mask: 0x1 00:07:13.269 00:07:13.269 Accel Perf Configuration: 00:07:13.269 Workload Type: dualcast 00:07:13.269 Transfer size: 4096 bytes 00:07:13.269 Vector count 1 00:07:13.269 Module: software 00:07:13.269 Queue depth: 32 00:07:13.269 Allocate depth: 32 00:07:13.269 # threads/core: 1 00:07:13.269 Run time: 1 seconds 00:07:13.269 Verify: Yes 00:07:13.269 00:07:13.269 Running for 1 seconds... 00:07:13.269 00:07:13.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.269 ------------------------------------------------------------------------------------ 00:07:13.269 0,0 347168/s 1356 MiB/s 0 0 00:07:13.269 ==================================================================================== 00:07:13.269 Total 347168/s 1356 MiB/s 0 0' 00:07:13.269 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.269 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.269 18:23:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.269 18:23:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.269 18:23:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.269 18:23:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.269 18:23:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.269 18:23:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.269 18:23:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.269 18:23:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.269 18:23:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.269 18:23:20 -- accel/accel.sh@42 -- # jq -r . 
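The bandwidth column in these result tables is consistent with transfers per second multiplied by the transfer size. A quick check of the dualcast figure above (this one-liner is purely illustrative, not part of the harness):

    # 347168 transfers/s x 4096 B per transfer, expressed in MiB/s (1 MiB = 1048576 B)
    awk 'BEGIN { printf "%.0f MiB/s\n", 347168 * 4096 / 1048576 }'   # prints 1356 MiB/s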
00:07:13.269 [2024-07-14 18:23:20.515628] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:13.269 [2024-07-14 18:23:20.515733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70799 ] 00:07:13.269 [2024-07-14 18:23:20.658193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.538 [2024-07-14 18:23:20.749573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.538 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.538 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.538 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.538 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.538 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.538 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.538 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.538 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.538 18:23:20 -- accel/accel.sh@21 -- # val=0x1 00:07:13.538 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.538 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.538 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val=dualcast 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val=software 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val=32 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val=32 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val=1 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 
18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val=Yes 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.539 18:23:20 -- accel/accel.sh@21 -- # val= 00:07:13.539 18:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.539 18:23:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@21 -- # val= 00:07:14.914 18:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@21 -- # val= 00:07:14.914 18:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@21 -- # val= 00:07:14.914 18:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@21 -- # val= 00:07:14.914 18:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@21 -- # val= 00:07:14.914 18:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@21 -- # val= 00:07:14.914 18:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.914 18:23:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.914 18:23:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.914 ************************************ 00:07:14.914 END TEST accel_dualcast 00:07:14.914 ************************************ 00:07:14.914 18:23:21 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:14.914 18:23:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.914 00:07:14.914 real 0m2.918s 00:07:14.914 user 0m2.492s 00:07:14.914 sys 0m0.219s 00:07:14.914 18:23:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.914 18:23:21 -- common/autotest_common.sh@10 -- # set +x 00:07:14.914 18:23:21 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:14.914 18:23:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:14.914 18:23:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.914 18:23:21 -- common/autotest_common.sh@10 -- # set +x 00:07:14.914 ************************************ 00:07:14.914 START TEST accel_compare 00:07:14.914 ************************************ 00:07:14.914 18:23:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:14.914 
18:23:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.914 18:23:22 -- accel/accel.sh@17 -- # local accel_module 00:07:14.914 18:23:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:14.914 18:23:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:14.914 18:23:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.914 18:23:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.914 18:23:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.914 18:23:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.914 18:23:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.914 18:23:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.914 18:23:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.914 18:23:22 -- accel/accel.sh@42 -- # jq -r . 00:07:14.914 [2024-07-14 18:23:22.024202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:14.914 [2024-07-14 18:23:22.024305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70839 ] 00:07:14.914 [2024-07-14 18:23:22.163140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.914 [2024-07-14 18:23:22.257222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.287 18:23:23 -- accel/accel.sh@18 -- # out=' 00:07:16.287 SPDK Configuration: 00:07:16.287 Core mask: 0x1 00:07:16.287 00:07:16.287 Accel Perf Configuration: 00:07:16.287 Workload Type: compare 00:07:16.287 Transfer size: 4096 bytes 00:07:16.287 Vector count 1 00:07:16.287 Module: software 00:07:16.287 Queue depth: 32 00:07:16.287 Allocate depth: 32 00:07:16.287 # threads/core: 1 00:07:16.287 Run time: 1 seconds 00:07:16.287 Verify: Yes 00:07:16.287 00:07:16.287 Running for 1 seconds... 00:07:16.287 00:07:16.287 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.287 ------------------------------------------------------------------------------------ 00:07:16.287 0,0 449216/s 1754 MiB/s 0 0 00:07:16.287 ==================================================================================== 00:07:16.287 Total 449216/s 1754 MiB/s 0 0' 00:07:16.287 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.287 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.287 18:23:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:16.287 18:23:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.287 18:23:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.287 18:23:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.287 18:23:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.287 18:23:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.287 18:23:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.287 18:23:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.287 18:23:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.287 18:23:23 -- accel/accel.sh@42 -- # jq -r . 00:07:16.287 [2024-07-14 18:23:23.488022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
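Every case prints a one-line "Total" summary in the same format, so throughput can be compared across workloads by filtering a saved copy of this console output. A hypothetical helper, assuming one log entry per line; build.log is a placeholder name:

    # Print each workload type together with its Total line.
    awk '/Workload Type:/ {wl=$NF} /Total .*\/s/ {print wl": "$0}' build.log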
00:07:16.287 [2024-07-14 18:23:23.488117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:07:16.287 [2024-07-14 18:23:23.625606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.545 [2024-07-14 18:23:23.719736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val=0x1 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val=compare 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val=software 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 18:23:23 -- accel/accel.sh@21 -- # val=32 00:07:16.545 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 18:23:23 -- accel/accel.sh@21 -- # val=32 00:07:16.546 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 18:23:23 -- accel/accel.sh@21 -- # val=1 00:07:16.546 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 18:23:23 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:16.546 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 18:23:23 -- accel/accel.sh@21 -- # val=Yes 00:07:16.546 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.546 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 18:23:23 -- accel/accel.sh@21 -- # val= 00:07:16.546 18:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 18:23:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@21 -- # val= 00:07:17.918 18:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@21 -- # val= 00:07:17.918 18:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@21 -- # val= 00:07:17.918 18:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@21 -- # val= 00:07:17.918 18:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@21 -- # val= 00:07:17.918 18:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.918 ************************************ 00:07:17.918 END TEST accel_compare 00:07:17.918 ************************************ 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@21 -- # val= 00:07:17.918 18:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.918 18:23:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.918 18:23:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.918 18:23:24 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:17.918 18:23:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.918 00:07:17.918 real 0m2.927s 00:07:17.918 user 0m2.498s 00:07:17.918 sys 0m0.218s 00:07:17.919 18:23:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.919 18:23:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 18:23:24 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:17.919 18:23:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:17.919 18:23:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.919 18:23:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 ************************************ 00:07:17.919 START TEST accel_xor 00:07:17.919 ************************************ 00:07:17.919 18:23:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:17.919 18:23:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.919 18:23:24 -- accel/accel.sh@17 -- # local accel_module 00:07:17.919 
18:23:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:17.919 18:23:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:17.919 18:23:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.919 18:23:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.919 18:23:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.919 18:23:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.919 18:23:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.919 18:23:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.919 18:23:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.919 18:23:24 -- accel/accel.sh@42 -- # jq -r . 00:07:17.919 [2024-07-14 18:23:25.004041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:17.919 [2024-07-14 18:23:25.004139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70893 ] 00:07:17.919 [2024-07-14 18:23:25.139426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.919 [2024-07-14 18:23:25.230217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.291 18:23:26 -- accel/accel.sh@18 -- # out=' 00:07:19.291 SPDK Configuration: 00:07:19.291 Core mask: 0x1 00:07:19.291 00:07:19.291 Accel Perf Configuration: 00:07:19.291 Workload Type: xor 00:07:19.291 Source buffers: 2 00:07:19.291 Transfer size: 4096 bytes 00:07:19.291 Vector count 1 00:07:19.291 Module: software 00:07:19.291 Queue depth: 32 00:07:19.291 Allocate depth: 32 00:07:19.291 # threads/core: 1 00:07:19.291 Run time: 1 seconds 00:07:19.291 Verify: Yes 00:07:19.291 00:07:19.291 Running for 1 seconds... 00:07:19.291 00:07:19.291 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.291 ------------------------------------------------------------------------------------ 00:07:19.291 0,0 249472/s 974 MiB/s 0 0 00:07:19.291 ==================================================================================== 00:07:19.291 Total 249472/s 974 MiB/s 0 0' 00:07:19.291 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.291 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.291 18:23:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:19.291 18:23:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.291 18:23:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.291 18:23:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.291 18:23:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.291 18:23:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.291 18:23:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.291 18:23:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.291 18:23:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.291 18:23:26 -- accel/accel.sh@42 -- # jq -r . 00:07:19.291 [2024-07-14 18:23:26.463467] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:19.291 [2024-07-14 18:23:26.463585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:07:19.291 [2024-07-14 18:23:26.601052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.291 [2024-07-14 18:23:26.690453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=0x1 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=xor 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=2 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=software 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=32 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=32 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=1 00:07:19.549 18:23:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val=Yes 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 18:23:26 -- accel/accel.sh@21 -- # val= 00:07:19.549 18:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 18:23:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@21 -- # val= 00:07:20.486 18:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@21 -- # val= 00:07:20.486 18:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@21 -- # val= 00:07:20.486 18:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@21 -- # val= 00:07:20.486 18:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@21 -- # val= 00:07:20.486 18:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@21 -- # val= 00:07:20.486 18:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.486 18:23:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.486 18:23:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.486 18:23:27 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:20.486 18:23:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.486 00:07:20.486 real 0m2.916s 00:07:20.486 user 0m2.494s 00:07:20.486 sys 0m0.215s 00:07:20.486 18:23:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.486 18:23:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.486 ************************************ 00:07:20.486 END TEST accel_xor 00:07:20.486 ************************************ 00:07:20.744 18:23:27 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:20.744 18:23:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:20.744 18:23:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.744 18:23:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.744 ************************************ 00:07:20.744 START TEST accel_xor 00:07:20.744 ************************************ 00:07:20.744 
18:23:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:20.744 18:23:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.744 18:23:27 -- accel/accel.sh@17 -- # local accel_module 00:07:20.744 18:23:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.744 18:23:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.744 18:23:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.744 18:23:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.744 18:23:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.744 18:23:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.744 18:23:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.744 18:23:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.744 18:23:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.744 18:23:27 -- accel/accel.sh@42 -- # jq -r . 00:07:20.744 [2024-07-14 18:23:27.968963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:20.744 [2024-07-14 18:23:27.969054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70946 ] 00:07:20.744 [2024-07-14 18:23:28.109681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.002 [2024-07-14 18:23:28.201830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.376 18:23:29 -- accel/accel.sh@18 -- # out=' 00:07:22.376 SPDK Configuration: 00:07:22.376 Core mask: 0x1 00:07:22.376 00:07:22.376 Accel Perf Configuration: 00:07:22.376 Workload Type: xor 00:07:22.376 Source buffers: 3 00:07:22.376 Transfer size: 4096 bytes 00:07:22.376 Vector count 1 00:07:22.376 Module: software 00:07:22.376 Queue depth: 32 00:07:22.376 Allocate depth: 32 00:07:22.376 # threads/core: 1 00:07:22.376 Run time: 1 seconds 00:07:22.376 Verify: Yes 00:07:22.376 00:07:22.376 Running for 1 seconds... 00:07:22.376 00:07:22.376 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.376 ------------------------------------------------------------------------------------ 00:07:22.376 0,0 236992/s 925 MiB/s 0 0 00:07:22.376 ==================================================================================== 00:07:22.376 Total 236992/s 925 MiB/s 0 0' 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:22.376 18:23:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.376 18:23:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.376 18:23:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.376 18:23:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.376 18:23:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.376 18:23:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.376 18:23:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.376 18:23:29 -- accel/accel.sh@42 -- # jq -r . 00:07:22.376 [2024-07-14 18:23:29.432992] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
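The only change from the earlier xor case is the -x option, which sets the number of source buffers: the previous run, started without -x, reports "Source buffers: 2", while this one reports "Source buffers: 3". A minimal sketch under the same build-location assumption as above:

    # xor across three source buffers instead of the default two
    ./build/examples/accel_perf -t 1 -w xor -y -x 3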
00:07:22.376 [2024-07-14 18:23:29.433088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:07:22.376 [2024-07-14 18:23:29.571564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.376 [2024-07-14 18:23:29.662733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=0x1 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=xor 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=3 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=software 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=32 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=32 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=1 00:07:22.376 18:23:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val=Yes 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.376 18:23:29 -- accel/accel.sh@21 -- # val= 00:07:22.376 18:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.376 18:23:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 18:23:30 -- accel/accel.sh@21 -- # val= 00:07:23.750 18:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # IFS=: 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 18:23:30 -- accel/accel.sh@21 -- # val= 00:07:23.750 18:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # IFS=: 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 18:23:30 -- accel/accel.sh@21 -- # val= 00:07:23.750 18:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # IFS=: 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 18:23:30 -- accel/accel.sh@21 -- # val= 00:07:23.750 18:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # IFS=: 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 18:23:30 -- accel/accel.sh@21 -- # val= 00:07:23.750 18:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # IFS=: 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 18:23:30 -- accel/accel.sh@21 -- # val= 00:07:23.750 18:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # IFS=: 00:07:23.750 18:23:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.750 ************************************ 00:07:23.750 END TEST accel_xor 00:07:23.750 ************************************ 00:07:23.750 18:23:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.750 18:23:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:23.750 18:23:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.750 00:07:23.750 real 0m2.931s 00:07:23.750 user 0m2.518s 00:07:23.750 sys 0m0.209s 00:07:23.750 18:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.750 18:23:30 -- common/autotest_common.sh@10 -- # set +x 00:07:23.750 18:23:30 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:23.750 18:23:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:23.750 18:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.750 18:23:30 -- common/autotest_common.sh@10 -- # set +x 00:07:23.750 ************************************ 00:07:23.750 START TEST accel_dif_verify 00:07:23.750 ************************************ 
00:07:23.750 18:23:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:23.750 18:23:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.750 18:23:30 -- accel/accel.sh@17 -- # local accel_module 00:07:23.750 18:23:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:23.750 18:23:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:23.750 18:23:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.750 18:23:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.750 18:23:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.750 18:23:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.750 18:23:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.750 18:23:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.750 18:23:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.750 18:23:30 -- accel/accel.sh@42 -- # jq -r . 00:07:23.750 [2024-07-14 18:23:30.946779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:23.751 [2024-07-14 18:23:30.946910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:07:23.751 [2024-07-14 18:23:31.089170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.030 [2024-07-14 18:23:31.178235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.401 18:23:32 -- accel/accel.sh@18 -- # out=' 00:07:25.401 SPDK Configuration: 00:07:25.401 Core mask: 0x1 00:07:25.401 00:07:25.401 Accel Perf Configuration: 00:07:25.401 Workload Type: dif_verify 00:07:25.401 Vector size: 4096 bytes 00:07:25.401 Transfer size: 4096 bytes 00:07:25.401 Block size: 512 bytes 00:07:25.401 Metadata size: 8 bytes 00:07:25.401 Vector count 1 00:07:25.401 Module: software 00:07:25.401 Queue depth: 32 00:07:25.401 Allocate depth: 32 00:07:25.401 # threads/core: 1 00:07:25.401 Run time: 1 seconds 00:07:25.401 Verify: No 00:07:25.401 00:07:25.401 Running for 1 seconds... 00:07:25.401 00:07:25.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.401 ------------------------------------------------------------------------------------ 00:07:25.401 0,0 99168/s 393 MiB/s 0 0 00:07:25.401 ==================================================================================== 00:07:25.401 Total 99168/s 387 MiB/s 0 0' 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.401 18:23:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.401 18:23:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.401 18:23:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.401 18:23:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.401 18:23:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.401 18:23:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.401 18:23:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.401 18:23:32 -- accel/accel.sh@42 -- # jq -r . 00:07:25.401 [2024-07-14 18:23:32.413029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
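The two dif_verify bandwidth figures above differ slightly (393 vs 387 MiB/s at the same 99168 transfers/s). They are consistent with counting each transfer with and without the 8 bytes of protection information per 512-byte block reported in the configuration (a 4096-byte payload is 8 blocks, hence 64 extra bytes per transfer); this is an inference from the numbers, not something the tool states:

    # payload only vs payload plus per-block DIF metadata, in MiB/s
    awk 'BEGIN {
        printf "payload only: %.0f MiB/s\n", 99168 * 4096 / 1048576
        printf "payload+DIF:  %.0f MiB/s\n", 99168 * (4096 + 64) / 1048576
    }'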
00:07:25.401 [2024-07-14 18:23:32.413129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71015 ] 00:07:25.401 [2024-07-14 18:23:32.551061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.401 [2024-07-14 18:23:32.641418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val=0x1 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val=dif_verify 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val=software 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 
-- # val=32 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val=32 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val=1 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val=No 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.401 18:23:32 -- accel/accel.sh@21 -- # val= 00:07:25.401 18:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.401 18:23:32 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@21 -- # val= 00:07:26.772 18:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@21 -- # val= 00:07:26.772 18:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@21 -- # val= 00:07:26.772 18:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@21 -- # val= 00:07:26.772 18:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@21 -- # val= 00:07:26.772 ************************************ 00:07:26.772 END TEST accel_dif_verify 00:07:26.772 ************************************ 00:07:26.772 18:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@21 -- # val= 00:07:26.772 18:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.772 18:23:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.772 18:23:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.772 18:23:33 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:26.772 18:23:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.772 00:07:26.772 real 0m2.931s 00:07:26.772 user 0m2.495s 00:07:26.772 sys 0m0.233s 00:07:26.772 18:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.772 
18:23:33 -- common/autotest_common.sh@10 -- # set +x 00:07:26.772 18:23:33 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:26.772 18:23:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:26.772 18:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.772 18:23:33 -- common/autotest_common.sh@10 -- # set +x 00:07:26.772 ************************************ 00:07:26.772 START TEST accel_dif_generate 00:07:26.772 ************************************ 00:07:26.772 18:23:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:26.772 18:23:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.772 18:23:33 -- accel/accel.sh@17 -- # local accel_module 00:07:26.772 18:23:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:26.772 18:23:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:26.772 18:23:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.772 18:23:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.772 18:23:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.772 18:23:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.772 18:23:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.772 18:23:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.772 18:23:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.772 18:23:33 -- accel/accel.sh@42 -- # jq -r . 00:07:26.772 [2024-07-14 18:23:33.920035] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:26.772 [2024-07-14 18:23:33.920131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:07:26.772 [2024-07-14 18:23:34.057436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.772 [2024-07-14 18:23:34.151381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.154 18:23:35 -- accel/accel.sh@18 -- # out=' 00:07:28.154 SPDK Configuration: 00:07:28.154 Core mask: 0x1 00:07:28.155 00:07:28.155 Accel Perf Configuration: 00:07:28.155 Workload Type: dif_generate 00:07:28.155 Vector size: 4096 bytes 00:07:28.155 Transfer size: 4096 bytes 00:07:28.155 Block size: 512 bytes 00:07:28.155 Metadata size: 8 bytes 00:07:28.155 Vector count 1 00:07:28.155 Module: software 00:07:28.155 Queue depth: 32 00:07:28.155 Allocate depth: 32 00:07:28.155 # threads/core: 1 00:07:28.155 Run time: 1 seconds 00:07:28.155 Verify: No 00:07:28.155 00:07:28.155 Running for 1 seconds... 
00:07:28.155 00:07:28.155 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.155 ------------------------------------------------------------------------------------ 00:07:28.155 0,0 120224/s 476 MiB/s 0 0 00:07:28.155 ==================================================================================== 00:07:28.155 Total 120224/s 469 MiB/s 0 0' 00:07:28.155 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.155 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.155 18:23:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:28.155 18:23:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.155 18:23:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.155 18:23:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.155 18:23:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.155 18:23:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.155 18:23:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.155 18:23:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.155 18:23:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.155 18:23:35 -- accel/accel.sh@42 -- # jq -r . 00:07:28.155 [2024-07-14 18:23:35.384304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.155 [2024-07-14 18:23:35.384399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71069 ] 00:07:28.155 [2024-07-14 18:23:35.524707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.413 [2024-07-14 18:23:35.622570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val=0x1 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val=dif_generate 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 
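For reference, the dif_generate pass echoed above is driven by the accel_perf example binary with the settings reported in its configuration block (4096-byte vectors and transfers, 512-byte blocks carrying 8 bytes of DIF metadata, queue depth 32, a 1-second run, no verification). A minimal standalone sketch of that invocation is shown below; it assumes the same SPDK build tree this job uses and omits the -c /dev/fd/62 JSON accel config that the harness pipes in, which should leave the default software module selected, matching the Module: software lines in the output.

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate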
00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val=software 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val=32 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val=32 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val=1 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 18:23:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.413 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.413 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.414 18:23:35 -- accel/accel.sh@21 -- # val=No 00:07:28.414 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.414 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.414 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.414 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.414 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.414 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.414 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.414 18:23:35 -- accel/accel.sh@21 -- # val= 00:07:28.414 18:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.414 18:23:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.414 18:23:35 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 18:23:36 -- accel/accel.sh@21 -- # val= 00:07:29.789 18:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 18:23:36 -- accel/accel.sh@21 -- # val= 00:07:29.789 18:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 18:23:36 -- accel/accel.sh@21 -- # val= 00:07:29.789 18:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.789 18:23:36 -- 
accel/accel.sh@20 -- # IFS=: 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 18:23:36 -- accel/accel.sh@21 -- # val= 00:07:29.789 18:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 18:23:36 -- accel/accel.sh@21 -- # val= 00:07:29.789 18:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 ************************************ 00:07:29.789 END TEST accel_dif_generate 00:07:29.789 ************************************ 00:07:29.789 18:23:36 -- accel/accel.sh@21 -- # val= 00:07:29.789 18:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.789 18:23:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.789 18:23:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.789 18:23:36 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:29.789 18:23:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.789 00:07:29.789 real 0m2.935s 00:07:29.789 user 0m2.501s 00:07:29.789 sys 0m0.229s 00:07:29.789 18:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.789 18:23:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.789 18:23:36 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:29.789 18:23:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:29.789 18:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.789 18:23:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.789 ************************************ 00:07:29.789 START TEST accel_dif_generate_copy 00:07:29.789 ************************************ 00:07:29.789 18:23:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:29.789 18:23:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.789 18:23:36 -- accel/accel.sh@17 -- # local accel_module 00:07:29.789 18:23:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:29.789 18:23:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:29.789 18:23:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.789 18:23:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.789 18:23:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.789 18:23:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.789 18:23:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.789 18:23:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.789 18:23:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.789 18:23:36 -- accel/accel.sh@42 -- # jq -r . 00:07:29.789 [2024-07-14 18:23:36.908880] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:29.789 [2024-07-14 18:23:36.909001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71098 ] 00:07:29.789 [2024-07-14 18:23:37.053985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.789 [2024-07-14 18:23:37.145724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.173 18:23:38 -- accel/accel.sh@18 -- # out=' 00:07:31.173 SPDK Configuration: 00:07:31.173 Core mask: 0x1 00:07:31.173 00:07:31.173 Accel Perf Configuration: 00:07:31.173 Workload Type: dif_generate_copy 00:07:31.173 Vector size: 4096 bytes 00:07:31.173 Transfer size: 4096 bytes 00:07:31.173 Vector count 1 00:07:31.173 Module: software 00:07:31.173 Queue depth: 32 00:07:31.173 Allocate depth: 32 00:07:31.173 # threads/core: 1 00:07:31.173 Run time: 1 seconds 00:07:31.173 Verify: No 00:07:31.173 00:07:31.173 Running for 1 seconds... 00:07:31.173 00:07:31.173 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.173 ------------------------------------------------------------------------------------ 00:07:31.173 0,0 91552/s 363 MiB/s 0 0 00:07:31.173 ==================================================================================== 00:07:31.173 Total 91552/s 357 MiB/s 0 0' 00:07:31.173 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.173 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.173 18:23:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:31.173 18:23:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:31.173 18:23:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.173 18:23:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.173 18:23:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.173 18:23:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.173 18:23:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.173 18:23:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.173 18:23:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.173 18:23:38 -- accel/accel.sh@42 -- # jq -r . 00:07:31.173 [2024-07-14 18:23:38.375140] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:31.173 [2024-07-14 18:23:38.375270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:07:31.173 [2024-07-14 18:23:38.513373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.434 [2024-07-14 18:23:38.609758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.434 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.434 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val=0x1 00:07:31.434 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.434 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.434 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:31.434 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.434 18:23:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.434 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.434 18:23:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val=software 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val=32 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val=32 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 
-- # val=1 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val=No 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.435 18:23:38 -- accel/accel.sh@21 -- # val= 00:07:31.435 18:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.435 18:23:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 18:23:39 -- accel/accel.sh@21 -- # val= 00:07:32.810 18:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 18:23:39 -- accel/accel.sh@21 -- # val= 00:07:32.810 18:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 18:23:39 -- accel/accel.sh@21 -- # val= 00:07:32.810 18:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 18:23:39 -- accel/accel.sh@21 -- # val= 00:07:32.810 18:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 ************************************ 00:07:32.810 END TEST accel_dif_generate_copy 00:07:32.810 ************************************ 00:07:32.810 18:23:39 -- accel/accel.sh@21 -- # val= 00:07:32.810 18:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 18:23:39 -- accel/accel.sh@21 -- # val= 00:07:32.810 18:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.810 18:23:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.810 18:23:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.810 18:23:39 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:32.810 18:23:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.810 00:07:32.810 real 0m2.934s 00:07:32.810 user 0m2.511s 00:07:32.810 sys 0m0.216s 00:07:32.810 18:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.810 18:23:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.810 18:23:39 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:32.810 18:23:39 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.810 18:23:39 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:32.810 18:23:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.810 18:23:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.810 ************************************ 00:07:32.810 START TEST accel_comp 00:07:32.810 ************************************ 00:07:32.810 18:23:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.810 18:23:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.810 18:23:39 -- accel/accel.sh@17 -- # local accel_module 00:07:32.810 18:23:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.810 18:23:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.810 18:23:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.810 18:23:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.810 18:23:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.810 18:23:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.810 18:23:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.810 18:23:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.810 18:23:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.810 18:23:39 -- accel/accel.sh@42 -- # jq -r . 00:07:32.810 [2024-07-14 18:23:39.894995] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:32.810 [2024-07-14 18:23:39.895668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71152 ] 00:07:32.810 [2024-07-14 18:23:40.038570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.810 [2024-07-14 18:23:40.132309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.186 18:23:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.186 00:07:34.186 SPDK Configuration: 00:07:34.186 Core mask: 0x1 00:07:34.186 00:07:34.186 Accel Perf Configuration: 00:07:34.186 Workload Type: compress 00:07:34.186 Transfer size: 4096 bytes 00:07:34.186 Vector count 1 00:07:34.186 Module: software 00:07:34.186 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.186 Queue depth: 32 00:07:34.186 Allocate depth: 32 00:07:34.186 # threads/core: 1 00:07:34.186 Run time: 1 seconds 00:07:34.186 Verify: No 00:07:34.186 00:07:34.186 Running for 1 seconds... 
00:07:34.186 00:07:34.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.186 ------------------------------------------------------------------------------------ 00:07:34.186 0,0 46496/s 193 MiB/s 0 0 00:07:34.186 ==================================================================================== 00:07:34.186 Total 46496/s 181 MiB/s 0 0' 00:07:34.186 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.186 18:23:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.186 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.186 18:23:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.186 18:23:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.186 18:23:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.186 18:23:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.186 18:23:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.186 18:23:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.186 18:23:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.186 18:23:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.186 18:23:41 -- accel/accel.sh@42 -- # jq -r . 00:07:34.186 [2024-07-14 18:23:41.365630] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:34.186 [2024-07-14 18:23:41.365737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71172 ] 00:07:34.186 [2024-07-14 18:23:41.501164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.186 [2024-07-14 18:23:41.593024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.444 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.444 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.444 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.444 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.444 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.444 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.444 18:23:41 -- accel/accel.sh@21 -- # val=0x1 00:07:34.444 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.444 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=compress 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 
00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=software 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=32 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=32 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=1 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val=No 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.445 18:23:41 -- accel/accel.sh@21 -- # val= 00:07:34.445 18:23:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.445 18:23:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@21 -- # val= 00:07:35.407 18:23:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@21 -- # val= 00:07:35.407 18:23:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@21 -- # val= 00:07:35.407 18:23:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@21 -- # val= 
00:07:35.407 18:23:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@21 -- # val= 00:07:35.407 18:23:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@21 -- # val= 00:07:35.407 18:23:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 18:23:42 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 18:23:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.407 18:23:42 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:35.407 18:23:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.407 00:07:35.407 real 0m2.951s 00:07:35.407 user 0m2.504s 00:07:35.407 sys 0m0.241s 00:07:35.407 18:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.407 18:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.407 ************************************ 00:07:35.407 END TEST accel_comp 00:07:35.407 ************************************ 00:07:35.666 18:23:42 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.666 18:23:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:35.666 18:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.666 18:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.666 ************************************ 00:07:35.666 START TEST accel_decomp 00:07:35.666 ************************************ 00:07:35.666 18:23:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.666 18:23:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.666 18:23:42 -- accel/accel.sh@17 -- # local accel_module 00:07:35.666 18:23:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.666 18:23:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.666 18:23:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.666 18:23:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.666 18:23:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.666 18:23:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.666 18:23:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.666 18:23:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.666 18:23:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.666 18:23:42 -- accel/accel.sh@42 -- # jq -r . 00:07:35.666 [2024-07-14 18:23:42.894692] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:35.666 [2024-07-14 18:23:42.894794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71206 ] 00:07:35.666 [2024-07-14 18:23:43.030245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.923 [2024-07-14 18:23:43.124174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.298 18:23:44 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:37.298 00:07:37.298 SPDK Configuration: 00:07:37.298 Core mask: 0x1 00:07:37.298 00:07:37.298 Accel Perf Configuration: 00:07:37.298 Workload Type: decompress 00:07:37.298 Transfer size: 4096 bytes 00:07:37.298 Vector count 1 00:07:37.298 Module: software 00:07:37.298 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.298 Queue depth: 32 00:07:37.298 Allocate depth: 32 00:07:37.298 # threads/core: 1 00:07:37.298 Run time: 1 seconds 00:07:37.298 Verify: Yes 00:07:37.298 00:07:37.298 Running for 1 seconds... 00:07:37.298 00:07:37.298 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.298 ------------------------------------------------------------------------------------ 00:07:37.298 0,0 65824/s 121 MiB/s 0 0 00:07:37.298 ==================================================================================== 00:07:37.298 Total 65824/s 257 MiB/s 0 0' 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.298 18:23:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.298 18:23:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.298 18:23:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.298 18:23:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.298 18:23:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.298 18:23:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.298 18:23:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.298 18:23:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.298 18:23:44 -- accel/accel.sh@42 -- # jq -r . 00:07:37.298 [2024-07-14 18:23:44.361308] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:37.298 [2024-07-14 18:23:44.361413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71226 ] 00:07:37.298 [2024-07-14 18:23:44.499448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.298 [2024-07-14 18:23:44.592047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=0x1 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=decompress 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=software 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=32 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- 
accel/accel.sh@21 -- # val=32 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=1 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val=Yes 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:37.298 18:23:44 -- accel/accel.sh@21 -- # val= 00:07:37.298 18:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # IFS=: 00:07:37.298 18:23:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@21 -- # val= 00:07:38.675 18:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@21 -- # val= 00:07:38.675 18:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@21 -- # val= 00:07:38.675 18:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@21 -- # val= 00:07:38.675 18:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@21 -- # val= 00:07:38.675 18:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@21 -- # val= 00:07:38.675 18:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.675 18:23:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.675 18:23:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.675 18:23:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:38.675 18:23:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.675 00:07:38.675 real 0m2.937s 00:07:38.675 user 0m2.515s 00:07:38.675 sys 0m0.216s 00:07:38.675 18:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.675 18:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.675 ************************************ 00:07:38.675 END TEST accel_decomp 00:07:38.675 ************************************ 00:07:38.675 18:23:45 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
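The accel_decmop_full case launched here reuses the decompress workload against /home/vagrant/spdk_repo/spdk/test/accel/bib with verification (-y) but adds -o 0. Judging from the configuration block that follows, which reports a transfer size of 111250 bytes instead of the 4096-byte default seen in the earlier runs, -o 0 appears to let the transfer size track the input data rather than the 4 KiB default; that reading is inferred from this log rather than taken from documentation. Under the same assumptions as before (same build tree, no -c JSON config), a standalone sketch would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0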
00:07:38.675 18:23:45 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:38.675 18:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.675 18:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.675 ************************************ 00:07:38.675 START TEST accel_decmop_full 00:07:38.675 ************************************ 00:07:38.675 18:23:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.675 18:23:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.675 18:23:45 -- accel/accel.sh@17 -- # local accel_module 00:07:38.675 18:23:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.675 18:23:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.675 18:23:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.675 18:23:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.675 18:23:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.675 18:23:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.675 18:23:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.675 18:23:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.675 18:23:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.675 18:23:45 -- accel/accel.sh@42 -- # jq -r . 00:07:38.675 [2024-07-14 18:23:45.879141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:38.675 [2024-07-14 18:23:45.879249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71255 ] 00:07:38.675 [2024-07-14 18:23:46.026422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.934 [2024-07-14 18:23:46.118688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.313 18:23:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.313 00:07:40.313 SPDK Configuration: 00:07:40.313 Core mask: 0x1 00:07:40.313 00:07:40.313 Accel Perf Configuration: 00:07:40.313 Workload Type: decompress 00:07:40.313 Transfer size: 111250 bytes 00:07:40.313 Vector count 1 00:07:40.313 Module: software 00:07:40.313 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.313 Queue depth: 32 00:07:40.313 Allocate depth: 32 00:07:40.313 # threads/core: 1 00:07:40.313 Run time: 1 seconds 00:07:40.313 Verify: Yes 00:07:40.313 00:07:40.313 Running for 1 seconds... 
00:07:40.313 00:07:40.313 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.313 ------------------------------------------------------------------------------------ 00:07:40.313 0,0 4512/s 186 MiB/s 0 0 00:07:40.313 ==================================================================================== 00:07:40.313 Total 4512/s 478 MiB/s 0 0' 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.313 18:23:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.313 18:23:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.313 18:23:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.313 18:23:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.313 18:23:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.313 18:23:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.313 18:23:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.313 18:23:47 -- accel/accel.sh@42 -- # jq -r . 00:07:40.313 [2024-07-14 18:23:47.362190] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.313 [2024-07-14 18:23:47.362294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71280 ] 00:07:40.313 [2024-07-14 18:23:47.500237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.313 [2024-07-14 18:23:47.592869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=0x1 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=decompress 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.313 18:23:47 -- accel/accel.sh@20 
-- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=software 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=32 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=32 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=1 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.313 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.313 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.313 18:23:47 -- accel/accel.sh@21 -- # val=Yes 00:07:40.314 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.314 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.314 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.314 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.314 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.314 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.314 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.314 18:23:47 -- accel/accel.sh@21 -- # val= 00:07:40.314 18:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.314 18:23:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.314 18:23:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@21 -- # val= 00:07:41.692 18:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # IFS=: 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@21 -- # val= 00:07:41.692 18:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # IFS=: 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@21 -- # val= 00:07:41.692 18:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # IFS=: 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@21 -- # 
val= 00:07:41.692 18:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # IFS=: 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@21 -- # val= 00:07:41.692 18:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # IFS=: 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@21 -- # val= 00:07:41.692 18:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # IFS=: 00:07:41.692 18:23:48 -- accel/accel.sh@20 -- # read -r var val 00:07:41.692 18:23:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.692 18:23:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:41.692 18:23:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.692 00:07:41.692 real 0m2.959s 00:07:41.692 user 0m2.534s 00:07:41.692 sys 0m0.219s 00:07:41.692 18:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.692 18:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.692 ************************************ 00:07:41.692 END TEST accel_decmop_full 00:07:41.692 ************************************ 00:07:41.692 18:23:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.692 18:23:48 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:41.692 18:23:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.692 18:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.692 ************************************ 00:07:41.692 START TEST accel_decomp_mcore 00:07:41.692 ************************************ 00:07:41.692 18:23:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.692 18:23:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.692 18:23:48 -- accel/accel.sh@17 -- # local accel_module 00:07:41.692 18:23:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.692 18:23:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.692 18:23:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.692 18:23:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.692 18:23:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.692 18:23:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.692 18:23:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.692 18:23:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.692 18:23:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.692 18:23:48 -- accel/accel.sh@42 -- # jq -r . 00:07:41.692 [2024-07-14 18:23:48.888611] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:41.692 [2024-07-14 18:23:48.888722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71309 ] 00:07:41.692 [2024-07-14 18:23:49.031994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.951 [2024-07-14 18:23:49.129019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.951 [2024-07-14 18:23:49.129151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.951 [2024-07-14 18:23:49.129838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.951 [2024-07-14 18:23:49.129858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.365 18:23:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:43.365 00:07:43.365 SPDK Configuration: 00:07:43.365 Core mask: 0xf 00:07:43.365 00:07:43.365 Accel Perf Configuration: 00:07:43.365 Workload Type: decompress 00:07:43.365 Transfer size: 4096 bytes 00:07:43.365 Vector count 1 00:07:43.365 Module: software 00:07:43.365 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.365 Queue depth: 32 00:07:43.365 Allocate depth: 32 00:07:43.365 # threads/core: 1 00:07:43.365 Run time: 1 seconds 00:07:43.365 Verify: Yes 00:07:43.365 00:07:43.365 Running for 1 seconds... 00:07:43.365 00:07:43.365 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.365 ------------------------------------------------------------------------------------ 00:07:43.365 0,0 60096/s 110 MiB/s 0 0 00:07:43.365 3,0 57024/s 105 MiB/s 0 0 00:07:43.365 2,0 59424/s 109 MiB/s 0 0 00:07:43.365 1,0 59488/s 109 MiB/s 0 0 00:07:43.365 ==================================================================================== 00:07:43.365 Total 236032/s 922 MiB/s 0 0' 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.365 18:23:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.365 18:23:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.365 18:23:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.365 18:23:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.365 18:23:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.365 18:23:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.365 18:23:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.365 18:23:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.365 18:23:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.365 18:23:50 -- accel/accel.sh@42 -- # jq -r . 00:07:43.365 [2024-07-14 18:23:50.385286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
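The accel_decomp_mcore pass above runs the software decompress path on all four reactors at once (core mask 0xf), and the per-core rows show roughly even throughput, about 57-60K transfers/s per core for a 236K/s, 922 MiB/s aggregate. The invocation is the one echoed earlier in the trace; a minimal standalone sketch, assuming the workspace layout used in this run and reading the flags from the configuration dump (-t run time in seconds, -w workload, -l input file, -y verify, -m core mask), would be:

    # Decompress benchmark across cores 0-3 (mask 0xf), verified, 1 second run.
    # The harness additionally passes "-c /dev/fd/62" with a generated JSON accel config.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -m 0xf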
00:07:43.365 [2024-07-14 18:23:50.385364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71337 ] 00:07:43.365 [2024-07-14 18:23:50.518819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.365 [2024-07-14 18:23:50.615994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.365 [2024-07-14 18:23:50.616164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.365 [2024-07-14 18:23:50.616271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.365 [2024-07-14 18:23:50.616271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.365 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.365 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.365 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.365 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.365 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.365 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.365 18:23:50 -- accel/accel.sh@21 -- # val=0xf 00:07:43.365 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.365 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.365 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.365 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=decompress 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=software 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 
00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=32 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=32 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=1 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val=Yes 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.366 18:23:50 -- accel/accel.sh@21 -- # val= 00:07:43.366 18:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.366 18:23:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- 
accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@21 -- # val= 00:07:44.737 18:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # IFS=: 00:07:44.737 18:23:51 -- accel/accel.sh@20 -- # read -r var val 00:07:44.737 18:23:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.737 18:23:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.737 18:23:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.737 00:07:44.737 real 0m2.972s 00:07:44.737 user 0m9.379s 00:07:44.737 sys 0m0.235s 00:07:44.737 18:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.737 18:23:51 -- common/autotest_common.sh@10 -- # set +x 00:07:44.737 ************************************ 00:07:44.737 END TEST accel_decomp_mcore 00:07:44.737 ************************************ 00:07:44.737 18:23:51 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.737 18:23:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:44.737 18:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.737 18:23:51 -- common/autotest_common.sh@10 -- # set +x 00:07:44.737 ************************************ 00:07:44.737 START TEST accel_decomp_full_mcore 00:07:44.737 ************************************ 00:07:44.737 18:23:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.737 18:23:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.737 18:23:51 -- accel/accel.sh@17 -- # local accel_module 00:07:44.737 18:23:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.738 18:23:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.738 18:23:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.738 18:23:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.738 18:23:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.738 18:23:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.738 18:23:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.738 18:23:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.738 18:23:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.738 18:23:51 -- accel/accel.sh@42 -- # jq -r . 00:07:44.738 [2024-07-14 18:23:51.906174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:44.738 [2024-07-14 18:23:51.906265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71369 ] 00:07:44.738 [2024-07-14 18:23:52.044581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.738 [2024-07-14 18:23:52.147100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.738 [2024-07-14 18:23:52.147228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.738 [2024-07-14 18:23:52.147359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.738 [2024-07-14 18:23:52.147359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.111 18:23:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:46.111 00:07:46.111 SPDK Configuration: 00:07:46.111 Core mask: 0xf 00:07:46.111 00:07:46.111 Accel Perf Configuration: 00:07:46.111 Workload Type: decompress 00:07:46.111 Transfer size: 111250 bytes 00:07:46.111 Vector count 1 00:07:46.111 Module: software 00:07:46.111 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.111 Queue depth: 32 00:07:46.111 Allocate depth: 32 00:07:46.111 # threads/core: 1 00:07:46.111 Run time: 1 seconds 00:07:46.111 Verify: Yes 00:07:46.111 00:07:46.111 Running for 1 seconds... 00:07:46.111 00:07:46.111 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.111 ------------------------------------------------------------------------------------ 00:07:46.111 0,0 4448/s 183 MiB/s 0 0 00:07:46.111 3,0 4320/s 178 MiB/s 0 0 00:07:46.111 2,0 4448/s 183 MiB/s 0 0 00:07:46.111 1,0 4448/s 183 MiB/s 0 0 00:07:46.111 ==================================================================================== 00:07:46.111 Total 17664/s 1874 MiB/s 0 0' 00:07:46.111 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.111 18:23:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.111 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.111 18:23:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.111 18:23:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.111 18:23:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.111 18:23:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.111 18:23:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.111 18:23:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.111 18:23:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.111 18:23:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.111 18:23:53 -- accel/accel.sh@42 -- # jq -r . 00:07:46.111 [2024-07-14 18:23:53.402973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
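With -o 0 the same multicore run switches to full-buffer operations; the configuration dump above reports a 111250-byte transfer size instead of the default 4096 bytes, so per-core transfer counts fall to about 4.4K/s while aggregate bandwidth rises to roughly 1874 MiB/s. A hedged sketch of this variant, with the -o 0 behavior inferred from that dump rather than from the tool's help text:

    # Full-buffer decompress (-o 0, reported here as a 111250-byte transfer size),
    # still across all four cores.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -m 0xf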
00:07:46.111 [2024-07-14 18:23:53.403064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:07:46.369 [2024-07-14 18:23:53.537699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.369 [2024-07-14 18:23:53.635106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.369 [2024-07-14 18:23:53.635223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.369 [2024-07-14 18:23:53.635339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.369 [2024-07-14 18:23:53.635343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=0xf 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=decompress 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=software 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 
00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=32 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=32 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=1 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val=Yes 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:46.369 18:23:53 -- accel/accel.sh@21 -- # val= 00:07:46.369 18:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # IFS=: 00:07:46.369 18:23:53 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- 
accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@21 -- # val= 00:07:47.742 18:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.742 18:23:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.742 18:23:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.742 18:23:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.742 18:23:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.742 ************************************ 00:07:47.742 END TEST accel_decomp_full_mcore 00:07:47.742 ************************************ 00:07:47.742 00:07:47.742 real 0m2.990s 00:07:47.742 user 0m9.398s 00:07:47.742 sys 0m0.257s 00:07:47.742 18:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.742 18:23:54 -- common/autotest_common.sh@10 -- # set +x 00:07:47.742 18:23:54 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.742 18:23:54 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:47.742 18:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.742 18:23:54 -- common/autotest_common.sh@10 -- # set +x 00:07:47.742 ************************************ 00:07:47.742 START TEST accel_decomp_mthread 00:07:47.742 ************************************ 00:07:47.742 18:23:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.742 18:23:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.742 18:23:54 -- accel/accel.sh@17 -- # local accel_module 00:07:47.742 18:23:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.742 18:23:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.742 18:23:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.742 18:23:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.743 18:23:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.743 18:23:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.743 18:23:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.743 18:23:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.743 18:23:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.743 18:23:54 -- accel/accel.sh@42 -- # jq -r . 00:07:47.743 [2024-07-14 18:23:54.943108] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:47.743 [2024-07-14 18:23:54.943688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71429 ] 00:07:47.743 [2024-07-14 18:23:55.079139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.000 [2024-07-14 18:23:55.174087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.374 18:23:56 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:49.374 00:07:49.374 SPDK Configuration: 00:07:49.374 Core mask: 0x1 00:07:49.374 00:07:49.374 Accel Perf Configuration: 00:07:49.374 Workload Type: decompress 00:07:49.374 Transfer size: 4096 bytes 00:07:49.374 Vector count 1 00:07:49.374 Module: software 00:07:49.374 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.374 Queue depth: 32 00:07:49.374 Allocate depth: 32 00:07:49.374 # threads/core: 2 00:07:49.374 Run time: 1 seconds 00:07:49.374 Verify: Yes 00:07:49.374 00:07:49.374 Running for 1 seconds... 00:07:49.374 00:07:49.374 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.374 ------------------------------------------------------------------------------------ 00:07:49.374 0,1 33888/s 62 MiB/s 0 0 00:07:49.374 0,0 33792/s 62 MiB/s 0 0 00:07:49.374 ==================================================================================== 00:07:49.374 Total 67680/s 264 MiB/s 0 0' 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.374 18:23:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.374 18:23:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.374 18:23:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.374 18:23:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.374 18:23:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.374 18:23:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.374 18:23:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.374 18:23:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.374 18:23:56 -- accel/accel.sh@42 -- # jq -r . 00:07:49.374 [2024-07-14 18:23:56.414811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
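accel_decomp_mthread narrows the run to a single core (mask 0x1) but asks for two worker threads on it via -T 2; the dump shows "# threads/core: 2" and the two result rows (0,0 and 0,1) split the work almost evenly at about 33-34K transfers/s each. A sketch under the same assumptions as above:

    # Single-core decompress with two threads on core 0 (-T 2).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -T 2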
00:07:49.374 [2024-07-14 18:23:56.414912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71449 ] 00:07:49.374 [2024-07-14 18:23:56.551840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.374 [2024-07-14 18:23:56.652074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=0x1 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=decompress 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=software 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=32 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- 
accel/accel.sh@21 -- # val=32 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=2 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val=Yes 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.374 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.374 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.374 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:49.375 18:23:56 -- accel/accel.sh@21 -- # val= 00:07:49.375 18:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.375 18:23:56 -- accel/accel.sh@20 -- # IFS=: 00:07:49.375 18:23:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@21 -- # val= 00:07:50.751 18:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.751 18:23:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.751 18:23:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.751 18:23:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:50.751 18:23:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.751 00:07:50.751 real 0m2.954s 00:07:50.751 user 0m2.518s 00:07:50.751 sys 0m0.228s 00:07:50.751 18:23:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.751 18:23:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.751 ************************************ 00:07:50.751 END 
TEST accel_decomp_mthread 00:07:50.751 ************************************ 00:07:50.751 18:23:57 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.751 18:23:57 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:50.751 18:23:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.751 18:23:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.751 ************************************ 00:07:50.751 START TEST accel_deomp_full_mthread 00:07:50.751 ************************************ 00:07:50.751 18:23:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.751 18:23:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.751 18:23:57 -- accel/accel.sh@17 -- # local accel_module 00:07:50.751 18:23:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.751 18:23:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.751 18:23:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.751 18:23:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.751 18:23:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.751 18:23:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.751 18:23:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.751 18:23:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.751 18:23:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.751 18:23:57 -- accel/accel.sh@42 -- # jq -r . 00:07:50.751 [2024-07-14 18:23:57.945790] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:50.751 [2024-07-14 18:23:57.945887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71483 ] 00:07:50.751 [2024-07-14 18:23:58.085577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.010 [2024-07-14 18:23:58.182746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.387 18:23:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:52.387 00:07:52.387 SPDK Configuration: 00:07:52.387 Core mask: 0x1 00:07:52.387 00:07:52.387 Accel Perf Configuration: 00:07:52.387 Workload Type: decompress 00:07:52.387 Transfer size: 111250 bytes 00:07:52.387 Vector count 1 00:07:52.387 Module: software 00:07:52.387 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.387 Queue depth: 32 00:07:52.387 Allocate depth: 32 00:07:52.387 # threads/core: 2 00:07:52.387 Run time: 1 seconds 00:07:52.387 Verify: Yes 00:07:52.387 00:07:52.387 Running for 1 seconds... 
00:07:52.387 00:07:52.387 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.387 ------------------------------------------------------------------------------------ 00:07:52.387 0,1 2272/s 93 MiB/s 0 0 00:07:52.387 0,0 2240/s 92 MiB/s 0 0 00:07:52.387 ==================================================================================== 00:07:52.387 Total 4512/s 478 MiB/s 0 0' 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.387 18:23:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.387 18:23:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.387 18:23:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.387 18:23:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.387 18:23:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.387 18:23:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.387 18:23:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.387 18:23:59 -- accel/accel.sh@42 -- # jq -r . 00:07:52.387 [2024-07-14 18:23:59.453410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:52.387 [2024-07-14 18:23:59.453526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71503 ] 00:07:52.387 [2024-07-14 18:23:59.592429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.387 [2024-07-14 18:23:59.688565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=0x1 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=decompress 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@24 -- # 
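The full_mthread variant (the script itself spells the test name accel_deomp_full_mthread) combines both knobs: -o 0 for full 111250-byte transfers and -T 2 for two threads on core 0. Throughput drops to about 2.2K transfers/s per thread, roughly 478 MiB/s in total, which is consistent with the much larger per-operation payload. Sketch:

    # Full-buffer decompress with two threads on one core.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -T 2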
accel_opc=decompress 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=software 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=32 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=32 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=2 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val=Yes 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.387 18:23:59 -- accel/accel.sh@21 -- # val= 00:07:52.387 18:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.387 18:23:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # 
read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@21 -- # val= 00:07:53.763 18:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.763 18:24:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.763 18:24:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.763 18:24:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:53.763 ************************************ 00:07:53.763 END TEST accel_deomp_full_mthread 00:07:53.763 ************************************ 00:07:53.763 18:24:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.763 00:07:53.763 real 0m3.014s 00:07:53.763 user 0m2.585s 00:07:53.763 sys 0m0.221s 00:07:53.763 18:24:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.763 18:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:53.763 18:24:00 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:53.763 18:24:00 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.763 18:24:00 -- accel/accel.sh@129 -- # build_accel_config 00:07:53.763 18:24:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.763 18:24:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:53.763 18:24:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.763 18:24:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.763 18:24:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.763 18:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:53.763 18:24:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.763 18:24:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.763 18:24:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.763 18:24:00 -- accel/accel.sh@42 -- # jq -r . 00:07:53.763 ************************************ 00:07:53.763 START TEST accel_dif_functional_tests 00:07:53.763 ************************************ 00:07:53.763 18:24:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.763 [2024-07-14 18:24:01.035115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:53.764 [2024-07-14 18:24:01.035210] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:07:53.764 [2024-07-14 18:24:01.170477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.028 [2024-07-14 18:24:01.269671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.028 [2024-07-14 18:24:01.269811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.028 [2024-07-14 18:24:01.269814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.028 00:07:54.028 00:07:54.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.028 http://cunit.sourceforge.net/ 00:07:54.028 00:07:54.028 00:07:54.028 Suite: accel_dif 00:07:54.028 Test: verify: DIF generated, GUARD check ...passed 00:07:54.028 Test: verify: DIF generated, APPTAG check ...passed 00:07:54.028 Test: verify: DIF generated, REFTAG check ...passed 00:07:54.028 Test: verify: DIF not generated, GUARD check ...passed 00:07:54.028 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 18:24:01.362423] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.028 [2024-07-14 18:24:01.362578] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.028 passed 00:07:54.028 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 18:24:01.362622] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.028 [2024-07-14 18:24:01.362666] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.028 [2024-07-14 18:24:01.362695] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.028 passed 00:07:54.028 Test: verify: APPTAG correct, APPTAG check ...[2024-07-14 18:24:01.362830] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.028 passed 00:07:54.028 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:54.028 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-14 18:24:01.362909] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:54.028 passed 00:07:54.028 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:54.028 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:54.028 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 18:24:01.363251] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:54.028 passed 00:07:54.028 Test: generate copy: DIF generated, GUARD check ...passed 00:07:54.028 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:54.028 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:54.028 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:54.028 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:54.028 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:54.028 Test: generate copy: iovecs-len validate ...passed 00:07:54.028 Test: generate copy: buffer alignment validate ...[2024-07-14 18:24:01.363825] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:54.028 passed 00:07:54.028 00:07:54.028 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.028 suites 1 1 n/a 0 0 00:07:54.028 tests 20 20 20 0 0 00:07:54.028 asserts 204 204 204 0 n/a 00:07:54.028 00:07:54.028 Elapsed time = 0.005 seconds 00:07:54.301 00:07:54.301 real 0m0.576s 00:07:54.301 user 0m0.767s 00:07:54.301 sys 0m0.154s 00:07:54.301 18:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.301 ************************************ 00:07:54.301 END TEST accel_dif_functional_tests 00:07:54.301 ************************************ 00:07:54.301 18:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.301 ************************************ 00:07:54.301 END TEST accel 00:07:54.301 ************************************ 00:07:54.301 00:07:54.301 real 1m3.190s 00:07:54.301 user 1m7.515s 00:07:54.301 sys 0m6.023s 00:07:54.301 18:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.301 18:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.301 18:24:01 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:54.301 18:24:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:54.301 18:24:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.301 18:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.301 ************************************ 00:07:54.301 START TEST accel_rpc 00:07:54.301 ************************************ 00:07:54.301 18:24:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:54.301 * Looking for test storage... 00:07:54.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:54.301 18:24:01 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.301 18:24:01 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71602 00:07:54.301 18:24:01 -- accel/accel_rpc.sh@15 -- # waitforlisten 71602 00:07:54.301 18:24:01 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.301 18:24:01 -- common/autotest_common.sh@819 -- # '[' -z 71602 ']' 00:07:54.301 18:24:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.301 18:24:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:54.301 18:24:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.301 18:24:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:54.301 18:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.559 [2024-07-14 18:24:01.776214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
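The accel_dif_functional_tests suite just above exercises the DIF (Data Integrity Field) verify and generate-copy paths. The *ERROR* lines from dif.c are the expected output of the negative cases, which deliberately feed mismatching Guard, App Tag and Ref Tag values, so all 20 CUnit tests still pass. The binary can be run outside the harness; in this log it receives a generated JSON accel config on file descriptor 62, and a standalone run would pass a saved config instead (accel.json below is a placeholder, not a file from this run):

    # As invoked by the harness:
    #   /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
    # Hedged standalone equivalent with a saved accel config:
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c accel.json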
00:07:54.559 [2024-07-14 18:24:01.776588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:07:54.559 [2024-07-14 18:24:01.908964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.818 [2024-07-14 18:24:02.004705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.818 [2024-07-14 18:24:02.005167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.385 18:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.385 18:24:02 -- common/autotest_common.sh@852 -- # return 0 00:07:55.385 18:24:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.385 18:24:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.385 18:24:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.385 18:24:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.386 18:24:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.386 18:24:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.386 18:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.386 18:24:02 -- common/autotest_common.sh@10 -- # set +x 00:07:55.386 ************************************ 00:07:55.386 START TEST accel_assign_opcode 00:07:55.386 ************************************ 00:07:55.386 18:24:02 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:55.386 18:24:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.386 18:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.386 18:24:02 -- common/autotest_common.sh@10 -- # set +x 00:07:55.386 [2024-07-14 18:24:02.769751] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.386 18:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.386 18:24:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.386 18:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.386 18:24:02 -- common/autotest_common.sh@10 -- # set +x 00:07:55.386 [2024-07-14 18:24:02.777771] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:55.386 18:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.386 18:24:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.386 18:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.386 18:24:02 -- common/autotest_common.sh@10 -- # set +x 00:07:55.644 18:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.644 18:24:03 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:55.644 18:24:03 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:55.644 18:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.644 18:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.644 18:24:03 -- accel/accel_rpc.sh@42 -- # grep software 00:07:55.644 18:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.644 software 00:07:55.644 ************************************ 00:07:55.644 END TEST accel_assign_opcode 00:07:55.644 ************************************ 00:07:55.644 00:07:55.644 real 0m0.301s 00:07:55.644 user 0m0.054s 00:07:55.644 sys 0m0.013s 00:07:55.644 18:24:03 -- 
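The accel_assign_opcode test drives the accel layer over JSON-RPC before framework initialization: assigning the copy opcode to a nonexistent module ("incorrect") is accepted at RPC time, reassigning it to software sticks, and after framework_start_init the accel_get_opc_assignments output filtered with jq -r .copy reports software. The same sequence can be replayed by hand against a target started with --wait-for-rpc, as this test does:

    # Assumes build/bin/spdk_tgt --wait-for-rpc is already listening on /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy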
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.644 18:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.903 18:24:03 -- accel/accel_rpc.sh@55 -- # killprocess 71602 00:07:55.903 18:24:03 -- common/autotest_common.sh@926 -- # '[' -z 71602 ']' 00:07:55.903 18:24:03 -- common/autotest_common.sh@930 -- # kill -0 71602 00:07:55.903 18:24:03 -- common/autotest_common.sh@931 -- # uname 00:07:55.903 18:24:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:55.903 18:24:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71602 00:07:55.903 killing process with pid 71602 00:07:55.903 18:24:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:55.903 18:24:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:55.903 18:24:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71602' 00:07:55.903 18:24:03 -- common/autotest_common.sh@945 -- # kill 71602 00:07:55.903 18:24:03 -- common/autotest_common.sh@950 -- # wait 71602 00:07:56.161 00:07:56.161 real 0m1.847s 00:07:56.161 user 0m1.947s 00:07:56.161 sys 0m0.437s 00:07:56.161 18:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.161 ************************************ 00:07:56.161 END TEST accel_rpc 00:07:56.161 ************************************ 00:07:56.161 18:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:56.161 18:24:03 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.161 18:24:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.161 18:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.161 18:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:56.161 ************************************ 00:07:56.161 START TEST app_cmdline 00:07:56.161 ************************************ 00:07:56.161 18:24:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.419 * Looking for test storage... 00:07:56.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.419 18:24:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.419 18:24:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71712 00:07:56.419 18:24:03 -- app/cmdline.sh@18 -- # waitforlisten 71712 00:07:56.419 18:24:03 -- common/autotest_common.sh@819 -- # '[' -z 71712 ']' 00:07:56.419 18:24:03 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.419 18:24:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.419 18:24:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.419 18:24:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.419 18:24:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.419 18:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 [2024-07-14 18:24:03.694949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
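The app_cmdline test starting here launches the target with an RPC allow-list, so only the two listed methods are callable. A rough manual equivalent of what the trace below exercises, using the same method names and shortened paths, might be:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version                       # allowed, returns the version JSON seen below
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # should list exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats                 # not on the allow-list, fails with -32601 Method not found

The comments describe the outcomes recorded in this log rather than guaranteed behaviour in other SPDK versions.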
00:07:56.419 [2024-07-14 18:24:03.695054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71712 ] 00:07:56.419 [2024-07-14 18:24:03.833645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.678 [2024-07-14 18:24:03.923160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.678 [2024-07-14 18:24:03.923320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.614 18:24:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.614 18:24:04 -- common/autotest_common.sh@852 -- # return 0 00:07:57.614 18:24:04 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:57.614 { 00:07:57.614 "fields": { 00:07:57.614 "commit": "4b94202c6", 00:07:57.614 "major": 24, 00:07:57.614 "minor": 1, 00:07:57.614 "patch": 1, 00:07:57.614 "suffix": "-pre" 00:07:57.614 }, 00:07:57.614 "version": "SPDK v24.01.1-pre git sha1 4b94202c6" 00:07:57.614 } 00:07:57.614 18:24:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:57.614 18:24:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:57.614 18:24:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:57.614 18:24:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:57.614 18:24:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:57.614 18:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.614 18:24:04 -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 18:24:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:57.614 18:24:04 -- app/cmdline.sh@26 -- # sort 00:07:57.614 18:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.614 18:24:05 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:57.614 18:24:05 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:57.614 18:24:05 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.614 18:24:05 -- common/autotest_common.sh@640 -- # local es=0 00:07:57.614 18:24:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.614 18:24:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.614 18:24:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:57.614 18:24:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.614 18:24:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:57.614 18:24:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.614 18:24:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:57.614 18:24:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.614 18:24:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:57.614 18:24:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.873 2024/07/14 18:24:05 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:57.873 request: 00:07:57.873 { 00:07:57.873 "method": "env_dpdk_get_mem_stats", 00:07:57.873 "params": {} 00:07:57.873 } 00:07:57.873 Got JSON-RPC error response 00:07:57.873 GoRPCClient: error on JSON-RPC call 00:07:57.873 18:24:05 -- common/autotest_common.sh@643 -- # es=1 00:07:57.873 18:24:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:57.873 18:24:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:57.873 18:24:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:57.873 18:24:05 -- app/cmdline.sh@1 -- # killprocess 71712 00:07:57.873 18:24:05 -- common/autotest_common.sh@926 -- # '[' -z 71712 ']' 00:07:57.873 18:24:05 -- common/autotest_common.sh@930 -- # kill -0 71712 00:07:57.873 18:24:05 -- common/autotest_common.sh@931 -- # uname 00:07:57.873 18:24:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.873 18:24:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71712 00:07:58.132 18:24:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.132 18:24:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.132 killing process with pid 71712 00:07:58.132 18:24:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71712' 00:07:58.132 18:24:05 -- common/autotest_common.sh@945 -- # kill 71712 00:07:58.132 18:24:05 -- common/autotest_common.sh@950 -- # wait 71712 00:07:58.391 00:07:58.391 real 0m2.157s 00:07:58.391 user 0m2.689s 00:07:58.391 sys 0m0.518s 00:07:58.391 18:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.391 18:24:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.391 ************************************ 00:07:58.391 END TEST app_cmdline 00:07:58.391 ************************************ 00:07:58.391 18:24:05 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.391 18:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.391 18:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.391 18:24:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.391 ************************************ 00:07:58.391 START TEST version 00:07:58.391 ************************************ 00:07:58.391 18:24:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.650 * Looking for test storage... 
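The version test traced next reconstructs the version string from include/spdk/version.h and compares it with what the Python bindings report. Condensed from the commands in the trace (the script's grep patterns also carry a [[:space:]]+ tail that is omitted here), a hand-run sketch would be along these lines:

    major=$(grep -E '^#define SPDK_VERSION_MAJOR' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR' include/spdk/version.h | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH' include/spdk/version.h | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX' include/spdk/version.h | cut -f2 | tr -d '"')
    # 24, 1, 1 and -pre combine to 24.1.1rc0 after version.sh rewrites the -pre suffix
    python3 -c 'import spdk; print(spdk.__version__)'     # expected to print the same 24.1.1rc0

This is a condensed illustration of the get_header_version helper, not the script itself.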
00:07:58.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:58.650 18:24:05 -- app/version.sh@17 -- # get_header_version major 00:07:58.650 18:24:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.650 18:24:05 -- app/version.sh@14 -- # cut -f2 00:07:58.650 18:24:05 -- app/version.sh@14 -- # tr -d '"' 00:07:58.650 18:24:05 -- app/version.sh@17 -- # major=24 00:07:58.650 18:24:05 -- app/version.sh@18 -- # get_header_version minor 00:07:58.650 18:24:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.650 18:24:05 -- app/version.sh@14 -- # cut -f2 00:07:58.650 18:24:05 -- app/version.sh@14 -- # tr -d '"' 00:07:58.650 18:24:05 -- app/version.sh@18 -- # minor=1 00:07:58.650 18:24:05 -- app/version.sh@19 -- # get_header_version patch 00:07:58.650 18:24:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.650 18:24:05 -- app/version.sh@14 -- # cut -f2 00:07:58.650 18:24:05 -- app/version.sh@14 -- # tr -d '"' 00:07:58.650 18:24:05 -- app/version.sh@19 -- # patch=1 00:07:58.650 18:24:05 -- app/version.sh@20 -- # get_header_version suffix 00:07:58.650 18:24:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.650 18:24:05 -- app/version.sh@14 -- # cut -f2 00:07:58.650 18:24:05 -- app/version.sh@14 -- # tr -d '"' 00:07:58.650 18:24:05 -- app/version.sh@20 -- # suffix=-pre 00:07:58.650 18:24:05 -- app/version.sh@22 -- # version=24.1 00:07:58.650 18:24:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:58.650 18:24:05 -- app/version.sh@25 -- # version=24.1.1 00:07:58.650 18:24:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:58.650 18:24:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:58.650 18:24:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:58.650 18:24:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:58.650 18:24:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:58.650 00:07:58.650 real 0m0.145s 00:07:58.650 user 0m0.080s 00:07:58.650 sys 0m0.100s 00:07:58.650 18:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.650 18:24:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.650 ************************************ 00:07:58.650 END TEST version 00:07:58.650 ************************************ 00:07:58.650 18:24:05 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@204 -- # uname -s 00:07:58.650 18:24:05 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:58.650 18:24:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:58.650 18:24:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:58.650 18:24:05 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:58.650 18:24:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:58.650 18:24:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.650 18:24:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:58.650 18:24:05 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:58.650 18:24:05 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:58.650 18:24:05 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.650 18:24:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.650 18:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.650 18:24:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.650 ************************************ 00:07:58.650 START TEST nvmf_tcp 00:07:58.650 ************************************ 00:07:58.650 18:24:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.650 * Looking for test storage... 00:07:58.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.909 18:24:06 -- nvmf/common.sh@7 -- # uname -s 00:07:58.909 18:24:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.909 18:24:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.909 18:24:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.909 18:24:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.909 18:24:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.909 18:24:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.909 18:24:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.909 18:24:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.909 18:24:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.909 18:24:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.909 18:24:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:07:58.909 18:24:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:07:58.909 18:24:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.909 18:24:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.909 18:24:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.909 18:24:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.909 18:24:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.909 18:24:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.909 18:24:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.909 18:24:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.909 18:24:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.909 18:24:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.909 18:24:06 -- paths/export.sh@5 -- # export PATH 00:07:58.909 18:24:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.909 18:24:06 -- nvmf/common.sh@46 -- # : 0 00:07:58.909 18:24:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.909 18:24:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.909 18:24:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.909 18:24:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.909 18:24:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.909 18:24:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.909 18:24:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.909 18:24:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:58.909 18:24:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.909 18:24:06 -- common/autotest_common.sh@10 -- # set +x 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:58.909 18:24:06 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.909 18:24:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.909 18:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.909 18:24:06 -- common/autotest_common.sh@10 -- # set +x 00:07:58.909 ************************************ 00:07:58.909 START TEST nvmf_example 00:07:58.909 ************************************ 00:07:58.909 18:24:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.909 * Looking for test storage... 
00:07:58.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:58.909 18:24:06 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.909 18:24:06 -- nvmf/common.sh@7 -- # uname -s 00:07:58.909 18:24:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.909 18:24:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.909 18:24:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.909 18:24:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.909 18:24:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.909 18:24:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.909 18:24:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.909 18:24:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.909 18:24:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.909 18:24:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.909 18:24:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:07:58.909 18:24:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:07:58.909 18:24:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.909 18:24:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.909 18:24:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.909 18:24:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.909 18:24:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.909 18:24:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.909 18:24:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.909 18:24:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.910 18:24:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.910 18:24:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.910 18:24:06 -- 
paths/export.sh@5 -- # export PATH 00:07:58.910 18:24:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.910 18:24:06 -- nvmf/common.sh@46 -- # : 0 00:07:58.910 18:24:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.910 18:24:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.910 18:24:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.910 18:24:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.910 18:24:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.910 18:24:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.910 18:24:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.910 18:24:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.910 18:24:06 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:58.910 18:24:06 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:58.910 18:24:06 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:58.910 18:24:06 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:58.910 18:24:06 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:58.910 18:24:06 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:58.910 18:24:06 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:58.910 18:24:06 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:58.910 18:24:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.910 18:24:06 -- common/autotest_common.sh@10 -- # set +x 00:07:58.910 18:24:06 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:58.910 18:24:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:58.910 18:24:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.910 18:24:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:58.910 18:24:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:58.910 18:24:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:58.910 18:24:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.910 18:24:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.910 18:24:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.910 18:24:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:58.910 18:24:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:58.910 18:24:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:58.910 18:24:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:58.910 18:24:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:58.910 18:24:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:58.910 18:24:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.910 18:24:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.910 18:24:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:58.910 18:24:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:58.910 18:24:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:58.910 18:24:06 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:58.910 18:24:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:58.910 18:24:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.910 18:24:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:58.910 18:24:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:58.910 18:24:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:58.910 18:24:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:58.910 18:24:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:58.910 Cannot find device "nvmf_init_br" 00:07:58.910 18:24:06 -- nvmf/common.sh@153 -- # true 00:07:58.910 18:24:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:58.910 Cannot find device "nvmf_tgt_br" 00:07:58.910 18:24:06 -- nvmf/common.sh@154 -- # true 00:07:58.910 18:24:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:58.910 Cannot find device "nvmf_tgt_br2" 00:07:58.910 18:24:06 -- nvmf/common.sh@155 -- # true 00:07:58.910 18:24:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:58.910 Cannot find device "nvmf_init_br" 00:07:58.910 18:24:06 -- nvmf/common.sh@156 -- # true 00:07:58.910 18:24:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:58.910 Cannot find device "nvmf_tgt_br" 00:07:58.910 18:24:06 -- nvmf/common.sh@157 -- # true 00:07:58.910 18:24:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:58.910 Cannot find device "nvmf_tgt_br2" 00:07:58.910 18:24:06 -- nvmf/common.sh@158 -- # true 00:07:58.910 18:24:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:59.168 Cannot find device "nvmf_br" 00:07:59.168 18:24:06 -- nvmf/common.sh@159 -- # true 00:07:59.168 18:24:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:59.168 Cannot find device "nvmf_init_if" 00:07:59.168 18:24:06 -- nvmf/common.sh@160 -- # true 00:07:59.168 18:24:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.168 18:24:06 -- nvmf/common.sh@161 -- # true 00:07:59.168 18:24:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.168 18:24:06 -- nvmf/common.sh@162 -- # true 00:07:59.168 18:24:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:59.168 18:24:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:59.168 18:24:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:59.168 18:24:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:59.168 18:24:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:59.168 18:24:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:59.168 18:24:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:59.168 18:24:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:59.168 18:24:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:59.168 18:24:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:59.168 
18:24:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:59.168 18:24:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:59.168 18:24:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:59.168 18:24:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:59.168 18:24:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:59.168 18:24:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:59.168 18:24:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:59.168 18:24:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:59.168 18:24:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:59.427 18:24:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:59.427 18:24:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:59.427 18:24:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:59.427 18:24:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:59.427 18:24:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:59.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:07:59.427 00:07:59.427 --- 10.0.0.2 ping statistics --- 00:07:59.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.427 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:07:59.427 18:24:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:59.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:59.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:59.427 00:07:59.427 --- 10.0.0.3 ping statistics --- 00:07:59.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.427 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:59.427 18:24:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:59.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:59.427 00:07:59.427 --- 10.0.0.1 ping statistics --- 00:07:59.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.427 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:59.427 18:24:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.427 18:24:06 -- nvmf/common.sh@421 -- # return 0 00:07:59.427 18:24:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:59.427 18:24:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.427 18:24:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:59.427 18:24:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:59.427 18:24:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.427 18:24:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:59.427 18:24:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:59.427 18:24:06 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:59.427 18:24:06 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:59.427 18:24:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:59.427 18:24:06 -- common/autotest_common.sh@10 -- # set +x 00:07:59.427 18:24:06 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:59.427 18:24:06 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:59.427 18:24:06 -- target/nvmf_example.sh@34 -- # nvmfpid=72073 00:07:59.427 18:24:06 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:59.427 18:24:06 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.427 18:24:06 -- target/nvmf_example.sh@36 -- # waitforlisten 72073 00:07:59.427 18:24:06 -- common/autotest_common.sh@819 -- # '[' -z 72073 ']' 00:07:59.427 18:24:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.427 18:24:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:59.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.427 18:24:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
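With the veth and namespace plumbing from the previous steps in place, the example target is started inside nvmf_tgt_ns_spdk and configured over RPC before spdk_nvme_perf connects to it from the host side. Condensing the rpc_cmd calls traced below into a runnable sketch (addresses, NQN, and flags copied from this log, paths shortened):

    ip netns exec nvmf_tgt_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512               # 64 MiB malloc bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

This mirrors the RPC sequence recorded below; in the actual test the calls go through the rpc_cmd wrapper rather than rpc.py directly.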
00:07:59.427 18:24:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:59.427 18:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:00.364 18:24:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.364 18:24:07 -- common/autotest_common.sh@852 -- # return 0 00:08:00.364 18:24:07 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:00.364 18:24:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:00.364 18:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 18:24:07 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.622 18:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.622 18:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 18:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.622 18:24:07 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:00.622 18:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.622 18:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 18:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.622 18:24:07 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:00.622 18:24:07 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.622 18:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.622 18:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 18:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.622 18:24:07 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:00.622 18:24:07 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.622 18:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.622 18:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 18:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.622 18:24:07 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.622 18:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.622 18:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 18:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.622 18:24:07 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:00.622 18:24:07 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:12.818 Initializing NVMe Controllers 00:08:12.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:12.818 Initialization complete. Launching workers. 
00:08:12.818 ======================================================== 00:08:12.818 Latency(us) 00:08:12.818 Device Information : IOPS MiB/s Average min max 00:08:12.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15504.96 60.57 4128.53 791.01 21929.14 00:08:12.818 ======================================================== 00:08:12.818 Total : 15504.96 60.57 4128.53 791.01 21929.14 00:08:12.818 00:08:12.819 18:24:18 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:12.819 18:24:18 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:12.819 18:24:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:12.819 18:24:18 -- nvmf/common.sh@116 -- # sync 00:08:12.819 18:24:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:12.819 18:24:18 -- nvmf/common.sh@119 -- # set +e 00:08:12.819 18:24:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:12.819 18:24:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:12.819 rmmod nvme_tcp 00:08:12.819 rmmod nvme_fabrics 00:08:12.819 rmmod nvme_keyring 00:08:12.819 18:24:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:12.819 18:24:18 -- nvmf/common.sh@123 -- # set -e 00:08:12.819 18:24:18 -- nvmf/common.sh@124 -- # return 0 00:08:12.819 18:24:18 -- nvmf/common.sh@477 -- # '[' -n 72073 ']' 00:08:12.819 18:24:18 -- nvmf/common.sh@478 -- # killprocess 72073 00:08:12.819 18:24:18 -- common/autotest_common.sh@926 -- # '[' -z 72073 ']' 00:08:12.819 18:24:18 -- common/autotest_common.sh@930 -- # kill -0 72073 00:08:12.819 18:24:18 -- common/autotest_common.sh@931 -- # uname 00:08:12.819 18:24:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:12.819 18:24:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72073 00:08:12.819 18:24:18 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:12.819 18:24:18 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:12.819 killing process with pid 72073 00:08:12.819 18:24:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72073' 00:08:12.819 18:24:18 -- common/autotest_common.sh@945 -- # kill 72073 00:08:12.819 18:24:18 -- common/autotest_common.sh@950 -- # wait 72073 00:08:12.819 nvmf threads initialize successfully 00:08:12.819 bdev subsystem init successfully 00:08:12.819 created a nvmf target service 00:08:12.819 create targets's poll groups done 00:08:12.819 all subsystems of target started 00:08:12.819 nvmf target is running 00:08:12.819 all subsystems of target stopped 00:08:12.819 destroy targets's poll groups done 00:08:12.819 destroyed the nvmf target service 00:08:12.819 bdev subsystem finish successfully 00:08:12.819 nvmf threads destroy successfully 00:08:12.819 18:24:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:12.819 18:24:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:12.819 18:24:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:12.819 18:24:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.819 18:24:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:12.819 18:24:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.819 18:24:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.819 18:24:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.819 18:24:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:12.819 18:24:18 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:12.819 18:24:18 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:08:12.819 18:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.819 00:08:12.819 real 0m12.417s 00:08:12.819 user 0m44.574s 00:08:12.819 sys 0m1.947s 00:08:12.819 18:24:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.819 ************************************ 00:08:12.819 END TEST nvmf_example 00:08:12.819 ************************************ 00:08:12.819 18:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.819 18:24:18 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:12.819 18:24:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:12.819 18:24:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.819 18:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.819 ************************************ 00:08:12.819 START TEST nvmf_filesystem 00:08:12.819 ************************************ 00:08:12.819 18:24:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:12.819 * Looking for test storage... 00:08:12.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.819 18:24:18 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:12.819 18:24:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:12.819 18:24:18 -- common/autotest_common.sh@34 -- # set -e 00:08:12.819 18:24:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:12.819 18:24:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:12.819 18:24:18 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:12.819 18:24:18 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:12.819 18:24:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:12.819 18:24:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:12.819 18:24:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:12.819 18:24:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:12.819 18:24:18 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:12.819 18:24:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:12.819 18:24:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:12.819 18:24:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:12.819 18:24:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:12.819 18:24:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:12.819 18:24:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:12.819 18:24:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:12.819 18:24:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:12.819 18:24:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:12.819 18:24:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:12.819 18:24:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:12.819 18:24:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:12.819 18:24:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:12.819 18:24:18 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:12.819 18:24:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:12.819 18:24:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:12.819 18:24:18 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:08:12.819 18:24:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:12.819 18:24:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:12.819 18:24:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:12.819 18:24:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:12.819 18:24:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:12.819 18:24:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:12.819 18:24:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:12.819 18:24:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:12.819 18:24:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:12.819 18:24:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:12.819 18:24:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:12.819 18:24:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:12.819 18:24:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:12.819 18:24:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:12.819 18:24:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:12.819 18:24:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:12.819 18:24:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:12.819 18:24:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:12.819 18:24:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:12.819 18:24:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:12.819 18:24:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:12.819 18:24:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:12.819 18:24:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:12.819 18:24:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:12.819 18:24:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:12.819 18:24:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:12.819 18:24:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:12.819 18:24:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:12.820 18:24:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:12.820 18:24:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:12.820 18:24:18 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:12.820 18:24:18 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:12.820 18:24:18 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:12.820 18:24:18 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:12.820 18:24:18 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:12.820 18:24:18 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:12.820 18:24:18 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:12.820 18:24:18 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:12.820 18:24:18 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:12.820 18:24:18 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:12.820 18:24:18 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:12.820 18:24:18 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:12.820 18:24:18 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:12.820 18:24:18 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:12.820 18:24:18 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:12.820 18:24:18 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:08:12.820 18:24:18 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:12.820 18:24:18 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:12.820 18:24:18 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:12.820 18:24:18 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:12.820 18:24:18 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:12.820 18:24:18 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:12.820 18:24:18 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:12.820 18:24:18 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:12.820 18:24:18 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:12.820 18:24:18 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:12.820 18:24:18 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:12.820 18:24:18 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:12.820 18:24:18 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:12.820 18:24:18 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:12.820 18:24:18 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:12.820 18:24:18 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:12.820 18:24:18 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:12.820 18:24:18 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:12.820 18:24:18 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:12.820 18:24:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:12.820 18:24:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:12.820 18:24:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:12.820 18:24:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:12.820 18:24:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:12.820 18:24:18 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:12.820 18:24:18 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:12.820 18:24:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:12.820 #define SPDK_CONFIG_H 00:08:12.820 #define SPDK_CONFIG_APPS 1 00:08:12.820 #define SPDK_CONFIG_ARCH native 00:08:12.820 #undef SPDK_CONFIG_ASAN 00:08:12.820 #define SPDK_CONFIG_AVAHI 1 00:08:12.820 #undef SPDK_CONFIG_CET 00:08:12.820 #define SPDK_CONFIG_COVERAGE 1 00:08:12.820 #define SPDK_CONFIG_CROSS_PREFIX 00:08:12.820 #undef SPDK_CONFIG_CRYPTO 00:08:12.820 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:12.820 #undef SPDK_CONFIG_CUSTOMOCF 00:08:12.820 #undef SPDK_CONFIG_DAOS 00:08:12.820 #define SPDK_CONFIG_DAOS_DIR 00:08:12.820 #define SPDK_CONFIG_DEBUG 1 00:08:12.820 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:12.820 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:12.820 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:12.820 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:12.820 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:12.820 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:12.820 #define SPDK_CONFIG_EXAMPLES 1 00:08:12.820 #undef SPDK_CONFIG_FC 00:08:12.820 #define 
SPDK_CONFIG_FC_PATH 00:08:12.820 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:12.820 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:12.820 #undef SPDK_CONFIG_FUSE 00:08:12.820 #undef SPDK_CONFIG_FUZZER 00:08:12.820 #define SPDK_CONFIG_FUZZER_LIB 00:08:12.820 #define SPDK_CONFIG_GOLANG 1 00:08:12.820 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:12.820 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:12.820 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:12.820 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:12.820 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:12.820 #define SPDK_CONFIG_IDXD 1 00:08:12.820 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:12.820 #undef SPDK_CONFIG_IPSEC_MB 00:08:12.820 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:12.820 #define SPDK_CONFIG_ISAL 1 00:08:12.820 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:12.820 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:12.820 #define SPDK_CONFIG_LIBDIR 00:08:12.820 #undef SPDK_CONFIG_LTO 00:08:12.820 #define SPDK_CONFIG_MAX_LCORES 00:08:12.820 #define SPDK_CONFIG_NVME_CUSE 1 00:08:12.820 #undef SPDK_CONFIG_OCF 00:08:12.820 #define SPDK_CONFIG_OCF_PATH 00:08:12.820 #define SPDK_CONFIG_OPENSSL_PATH 00:08:12.820 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:12.820 #undef SPDK_CONFIG_PGO_USE 00:08:12.820 #define SPDK_CONFIG_PREFIX /usr/local 00:08:12.820 #undef SPDK_CONFIG_RAID5F 00:08:12.820 #undef SPDK_CONFIG_RBD 00:08:12.820 #define SPDK_CONFIG_RDMA 1 00:08:12.820 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:12.820 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:12.820 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:12.820 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:12.820 #define SPDK_CONFIG_SHARED 1 00:08:12.820 #undef SPDK_CONFIG_SMA 00:08:12.820 #define SPDK_CONFIG_TESTS 1 00:08:12.820 #undef SPDK_CONFIG_TSAN 00:08:12.820 #define SPDK_CONFIG_UBLK 1 00:08:12.820 #define SPDK_CONFIG_UBSAN 1 00:08:12.820 #undef SPDK_CONFIG_UNIT_TESTS 00:08:12.820 #undef SPDK_CONFIG_URING 00:08:12.820 #define SPDK_CONFIG_URING_PATH 00:08:12.820 #undef SPDK_CONFIG_URING_ZNS 00:08:12.820 #define SPDK_CONFIG_USDT 1 00:08:12.820 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:12.820 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:12.820 #undef SPDK_CONFIG_VFIO_USER 00:08:12.820 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:12.820 #define SPDK_CONFIG_VHOST 1 00:08:12.820 #define SPDK_CONFIG_VIRTIO 1 00:08:12.820 #undef SPDK_CONFIG_VTUNE 00:08:12.820 #define SPDK_CONFIG_VTUNE_DIR 00:08:12.820 #define SPDK_CONFIG_WERROR 1 00:08:12.820 #define SPDK_CONFIG_WPDK_DIR 00:08:12.820 #undef SPDK_CONFIG_XNVME 00:08:12.820 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:12.820 18:24:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:12.820 18:24:18 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.820 18:24:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.820 18:24:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.820 18:24:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.820 18:24:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.820 18:24:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.820 18:24:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.820 18:24:18 -- paths/export.sh@5 -- # export PATH 00:08:12.820 18:24:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.820 18:24:18 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:12.821 18:24:18 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:12.821 18:24:18 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:12.821 18:24:18 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:12.821 18:24:18 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:12.821 18:24:18 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:12.821 18:24:18 -- pm/common@16 -- # TEST_TAG=N/A 00:08:12.821 18:24:18 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:12.821 18:24:18 -- common/autotest_common.sh@52 -- # : 1 00:08:12.821 18:24:18 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:12.821 18:24:18 -- common/autotest_common.sh@56 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:12.821 18:24:18 -- common/autotest_common.sh@58 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:12.821 18:24:18 -- 
common/autotest_common.sh@60 -- # : 1 00:08:12.821 18:24:18 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:12.821 18:24:18 -- common/autotest_common.sh@62 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:12.821 18:24:18 -- common/autotest_common.sh@64 -- # : 00:08:12.821 18:24:18 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:12.821 18:24:18 -- common/autotest_common.sh@66 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:12.821 18:24:18 -- common/autotest_common.sh@68 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:12.821 18:24:18 -- common/autotest_common.sh@70 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:12.821 18:24:18 -- common/autotest_common.sh@72 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:12.821 18:24:18 -- common/autotest_common.sh@74 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:12.821 18:24:18 -- common/autotest_common.sh@76 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:12.821 18:24:18 -- common/autotest_common.sh@78 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:12.821 18:24:18 -- common/autotest_common.sh@80 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:12.821 18:24:18 -- common/autotest_common.sh@82 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:12.821 18:24:18 -- common/autotest_common.sh@84 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:12.821 18:24:18 -- common/autotest_common.sh@86 -- # : 1 00:08:12.821 18:24:18 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:12.821 18:24:18 -- common/autotest_common.sh@88 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:12.821 18:24:18 -- common/autotest_common.sh@90 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:12.821 18:24:18 -- common/autotest_common.sh@92 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:12.821 18:24:18 -- common/autotest_common.sh@94 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:12.821 18:24:18 -- common/autotest_common.sh@96 -- # : tcp 00:08:12.821 18:24:18 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:12.821 18:24:18 -- common/autotest_common.sh@98 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:12.821 18:24:18 -- common/autotest_common.sh@100 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:12.821 18:24:18 -- common/autotest_common.sh@102 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:12.821 18:24:18 -- common/autotest_common.sh@104 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:12.821 18:24:18 -- common/autotest_common.sh@106 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:12.821 
18:24:18 -- common/autotest_common.sh@108 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:12.821 18:24:18 -- common/autotest_common.sh@110 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:12.821 18:24:18 -- common/autotest_common.sh@112 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:12.821 18:24:18 -- common/autotest_common.sh@114 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:12.821 18:24:18 -- common/autotest_common.sh@116 -- # : 1 00:08:12.821 18:24:18 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:12.821 18:24:18 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:12.821 18:24:18 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:12.821 18:24:18 -- common/autotest_common.sh@120 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:12.821 18:24:18 -- common/autotest_common.sh@122 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:12.821 18:24:18 -- common/autotest_common.sh@124 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:12.821 18:24:18 -- common/autotest_common.sh@126 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:12.821 18:24:18 -- common/autotest_common.sh@128 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:12.821 18:24:18 -- common/autotest_common.sh@130 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:12.821 18:24:18 -- common/autotest_common.sh@132 -- # : v23.11 00:08:12.821 18:24:18 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:12.821 18:24:18 -- common/autotest_common.sh@134 -- # : true 00:08:12.821 18:24:18 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:12.821 18:24:18 -- common/autotest_common.sh@136 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:12.821 18:24:18 -- common/autotest_common.sh@138 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:12.821 18:24:18 -- common/autotest_common.sh@140 -- # : 1 00:08:12.821 18:24:18 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:12.821 18:24:18 -- common/autotest_common.sh@142 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:12.821 18:24:18 -- common/autotest_common.sh@144 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:12.821 18:24:18 -- common/autotest_common.sh@146 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:12.821 18:24:18 -- common/autotest_common.sh@148 -- # : 00:08:12.821 18:24:18 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:12.821 18:24:18 -- common/autotest_common.sh@150 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:12.821 18:24:18 -- common/autotest_common.sh@152 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:12.821 18:24:18 -- common/autotest_common.sh@154 -- # : 0 00:08:12.821 18:24:18 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:12.821 18:24:18 -- common/autotest_common.sh@156 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:12.821 18:24:18 -- common/autotest_common.sh@158 -- # : 0 00:08:12.821 18:24:18 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:12.822 18:24:18 -- common/autotest_common.sh@160 -- # : 0 00:08:12.822 18:24:18 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:12.822 18:24:18 -- common/autotest_common.sh@163 -- # : 00:08:12.822 18:24:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:12.822 18:24:18 -- common/autotest_common.sh@165 -- # : 1 00:08:12.822 18:24:18 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:12.822 18:24:18 -- common/autotest_common.sh@167 -- # : 1 00:08:12.822 18:24:18 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:12.822 18:24:18 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:12.822 18:24:18 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:12.822 18:24:18 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:12.822 18:24:18 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:12.822 18:24:18 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:12.822 18:24:18 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:12.822 18:24:18 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:12.822 18:24:18 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:12.822 18:24:18 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:12.822 18:24:18 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:12.822 18:24:18 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:12.822 18:24:18 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:12.822 18:24:18 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:12.822 18:24:18 -- common/autotest_common.sh@196 -- # cat 00:08:12.822 18:24:18 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:12.822 18:24:18 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:12.822 18:24:18 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:12.822 18:24:18 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:12.822 18:24:18 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:12.822 18:24:18 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:12.822 18:24:18 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:12.822 18:24:18 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:12.822 18:24:18 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:12.822 18:24:18 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:12.822 18:24:18 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:12.822 18:24:18 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:12.822 18:24:18 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:12.822 18:24:18 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:12.822 18:24:18 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:12.822 18:24:18 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:12.822 18:24:18 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:12.822 18:24:18 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:12.822 18:24:18 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:12.822 18:24:18 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:12.822 18:24:18 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:12.822 18:24:18 -- common/autotest_common.sh@249 -- # valgrind= 00:08:12.822 18:24:18 -- common/autotest_common.sh@255 -- # uname -s 00:08:12.822 18:24:18 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:12.822 18:24:18 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:12.822 18:24:18 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:12.822 18:24:18 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:12.822 18:24:18 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:12.822 18:24:18 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:12.822 18:24:18 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:12.822 18:24:18 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:08:12.822 18:24:18 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:12.822 18:24:18 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:12.822 18:24:18 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:12.822 18:24:18 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:12.822 18:24:18 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:12.822 18:24:18 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:12.822 18:24:18 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:12.822 18:24:18 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:12.822 18:24:18 -- common/autotest_common.sh@309 -- # [[ -z 72306 ]] 00:08:12.822 18:24:18 -- common/autotest_common.sh@309 -- # kill -0 72306 00:08:12.822 18:24:18 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:12.822 18:24:18 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:12.822 18:24:18 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:12.822 18:24:18 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:12.822 18:24:18 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:12.822 18:24:18 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:12.822 18:24:18 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:12.822 18:24:18 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:12.822 18:24:18 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.gG2bRr 00:08:12.822 18:24:18 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:12.822 18:24:18 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:12.822 18:24:18 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:12.822 18:24:18 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.gG2bRr/tests/target /tmp/spdk.gG2bRr 00:08:12.822 18:24:18 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:12.822 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.822 18:24:18 -- 
common/autotest_common.sh@318 -- # df -T 00:08:12.822 18:24:18 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:12.822 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:08:12.822 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266634240 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=11997949952 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=5975920640 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=11997949952 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=5975920640 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267756544 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267895808 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:08:12.823 18:24:18 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:08:12.823 18:24:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=93931917312 00:08:12.823 18:24:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:08:12.823 18:24:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=5770862592 00:08:12.823 18:24:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:12.823 18:24:18 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:12.823 * Looking for test storage... 
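
For reference, the set_test_storage pass traced above (df -T parsed into the mounts/fss/sizes/avails arrays, then checked against the 2 GiB request) amounts to roughly the following. This is an abbreviated sketch, not the script itself: the candidate list and the fallback path come from the trace, but the df parsing is simplified and the explicit byte-unit handling is an assumption.

    # Abbreviated sketch of the storage-selection step seen in the trace above
    requested_size=2147483648                          # 2 GiB, as requested in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)        # e.g. /tmp/spdk.gG2bRr
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    for target_dir in "${storage_candidates[@]}"; do
        mkdir -p "$target_dir"
        # free space (in bytes) on the filesystem backing this candidate
        target_space=$(df -B1 --output=avail "$target_dir" | tail -n1)
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir       # here: /home/vagrant/spdk_repo/spdk/test/nvmf/target
            break
        fi
    done
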
00:08:12.823 18:24:18 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:12.823 18:24:18 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:12.823 18:24:18 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.823 18:24:18 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:12.823 18:24:18 -- common/autotest_common.sh@363 -- # mount=/home 00:08:12.823 18:24:18 -- common/autotest_common.sh@365 -- # target_space=11997949952 00:08:12.823 18:24:18 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:12.823 18:24:18 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:12.823 18:24:18 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:08:12.823 18:24:18 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:08:12.823 18:24:18 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:08:12.823 18:24:18 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.823 18:24:18 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.823 18:24:18 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.823 18:24:18 -- common/autotest_common.sh@380 -- # return 0 00:08:12.823 18:24:18 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:12.823 18:24:18 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:12.823 18:24:18 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:12.823 18:24:18 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:12.823 18:24:18 -- common/autotest_common.sh@1672 -- # true 00:08:12.823 18:24:18 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:12.823 18:24:18 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:12.823 18:24:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:12.823 18:24:18 -- common/autotest_common.sh@27 -- # exec 00:08:12.823 18:24:18 -- common/autotest_common.sh@29 -- # exec 00:08:12.823 18:24:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:12.823 18:24:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:12.823 18:24:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:12.823 18:24:18 -- common/autotest_common.sh@18 -- # set -x 00:08:12.823 18:24:18 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.823 18:24:18 -- nvmf/common.sh@7 -- # uname -s 00:08:12.823 18:24:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.823 18:24:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.823 18:24:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.823 18:24:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.823 18:24:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.823 18:24:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.823 18:24:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.823 18:24:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.823 18:24:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.823 18:24:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.823 18:24:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:08:12.823 18:24:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:08:12.823 18:24:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.823 18:24:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.823 18:24:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.824 18:24:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.824 18:24:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.824 18:24:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.824 18:24:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.824 18:24:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.824 18:24:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.824 18:24:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.824 18:24:18 -- paths/export.sh@5 -- # export PATH 00:08:12.824 18:24:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.824 18:24:18 -- nvmf/common.sh@46 -- # : 0 00:08:12.824 18:24:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:12.824 18:24:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:12.824 18:24:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:12.824 18:24:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.824 18:24:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.824 18:24:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:12.824 18:24:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:12.824 18:24:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:12.824 18:24:18 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:12.824 18:24:18 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:12.824 18:24:18 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:12.824 18:24:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:12.824 18:24:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.824 18:24:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:12.824 18:24:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:12.824 18:24:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:12.824 18:24:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.824 18:24:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.824 18:24:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.824 18:24:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:12.824 18:24:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:12.824 18:24:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:12.824 18:24:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:12.824 18:24:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:12.824 18:24:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:12.824 18:24:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.824 18:24:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.824 18:24:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.824 18:24:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:12.824 18:24:18 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.824 18:24:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.824 18:24:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.824 18:24:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.824 18:24:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.824 18:24:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.824 18:24:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.824 18:24:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.824 18:24:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:12.824 18:24:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:12.824 Cannot find device "nvmf_tgt_br" 00:08:12.824 18:24:18 -- nvmf/common.sh@154 -- # true 00:08:12.824 18:24:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.824 Cannot find device "nvmf_tgt_br2" 00:08:12.824 18:24:18 -- nvmf/common.sh@155 -- # true 00:08:12.824 18:24:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:12.824 18:24:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:12.824 Cannot find device "nvmf_tgt_br" 00:08:12.824 18:24:18 -- nvmf/common.sh@157 -- # true 00:08:12.824 18:24:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:12.824 Cannot find device "nvmf_tgt_br2" 00:08:12.824 18:24:18 -- nvmf/common.sh@158 -- # true 00:08:12.824 18:24:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:12.824 18:24:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:12.824 18:24:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.824 18:24:18 -- nvmf/common.sh@161 -- # true 00:08:12.824 18:24:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.824 18:24:18 -- nvmf/common.sh@162 -- # true 00:08:12.824 18:24:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.824 18:24:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.825 18:24:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.825 18:24:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.825 18:24:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.825 18:24:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.825 18:24:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.825 18:24:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.825 18:24:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.825 18:24:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:12.825 18:24:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:12.825 18:24:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:12.825 18:24:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:12.825 18:24:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.825 18:24:19 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.825 18:24:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.825 18:24:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:12.825 18:24:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:12.825 18:24:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.825 18:24:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.825 18:24:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.825 18:24:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.825 18:24:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.825 18:24:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:12.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:12.825 00:08:12.825 --- 10.0.0.2 ping statistics --- 00:08:12.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.825 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:12.825 18:24:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:12.825 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.825 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:12.825 00:08:12.825 --- 10.0.0.3 ping statistics --- 00:08:12.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.825 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:12.825 18:24:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:12.825 00:08:12.825 --- 10.0.0.1 ping statistics --- 00:08:12.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.825 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:12.825 18:24:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.825 18:24:19 -- nvmf/common.sh@421 -- # return 0 00:08:12.825 18:24:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:12.825 18:24:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.825 18:24:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:12.825 18:24:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:12.825 18:24:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.825 18:24:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:12.825 18:24:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:12.825 18:24:19 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:12.825 18:24:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:12.825 18:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.825 18:24:19 -- common/autotest_common.sh@10 -- # set +x 00:08:12.825 ************************************ 00:08:12.825 START TEST nvmf_filesystem_no_in_capsule 00:08:12.825 ************************************ 00:08:12.825 18:24:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:12.825 18:24:19 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:12.825 18:24:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:12.825 18:24:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:12.825 18:24:19 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:08:12.825 18:24:19 -- common/autotest_common.sh@10 -- # set +x 00:08:12.825 18:24:19 -- nvmf/common.sh@469 -- # nvmfpid=72472 00:08:12.825 18:24:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.825 18:24:19 -- nvmf/common.sh@470 -- # waitforlisten 72472 00:08:12.825 18:24:19 -- common/autotest_common.sh@819 -- # '[' -z 72472 ']' 00:08:12.825 18:24:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.825 18:24:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:12.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.825 18:24:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.825 18:24:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:12.825 18:24:19 -- common/autotest_common.sh@10 -- # set +x 00:08:12.825 [2024-07-14 18:24:19.241371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:12.825 [2024-07-14 18:24:19.241515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.825 [2024-07-14 18:24:19.386647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.825 [2024-07-14 18:24:19.493959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.825 [2024-07-14 18:24:19.494151] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.825 [2024-07-14 18:24:19.494167] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.825 [2024-07-14 18:24:19.494178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
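
The nvmf_veth_init and nvmfappstart steps traced above build a small veth topology (initiator interface on the host, target interfaces inside the nvmf_tgt_ns_spdk namespace, bridged via nvmf_br) and then launch nvmf_tgt inside that namespace. Condensed below, with interface names, addresses and flags taken verbatim from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and omitted here, and error handling plus the waitforlisten polling are left out.

    # Condensed replay of nvmf_veth_init as traced above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                                   # bridge the two host-side veth ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # admit the NVMe/TCP port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                                # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> host

    # nvmfappstart: run the target inside the namespace, then wait on /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
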
00:08:12.825 [2024-07-14 18:24:19.494593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.825 [2024-07-14 18:24:19.494688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.825 [2024-07-14 18:24:19.495119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.825 [2024-07-14 18:24:19.495153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.083 18:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:13.083 18:24:20 -- common/autotest_common.sh@852 -- # return 0 00:08:13.083 18:24:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:13.083 18:24:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 18:24:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.083 18:24:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:13.083 18:24:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:13.083 18:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 [2024-07-14 18:24:20.290641] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.083 18:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.083 18:24:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:13.083 18:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 Malloc1 00:08:13.083 18:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.083 18:24:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.083 18:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 18:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.083 18:24:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:13.083 18:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 18:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.083 18:24:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.083 18:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 [2024-07-14 18:24:20.481109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.083 18:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.083 18:24:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:13.083 18:24:20 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:13.083 18:24:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:13.083 18:24:20 -- common/autotest_common.sh@1359 -- # local bs 00:08:13.083 18:24:20 -- common/autotest_common.sh@1360 -- # local nb 00:08:13.083 18:24:20 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:13.083 18:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.083 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.341 
18:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.341 18:24:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:13.341 { 00:08:13.341 "aliases": [ 00:08:13.341 "60c3f085-0208-4bf6-86d6-5eac22537b49" 00:08:13.341 ], 00:08:13.341 "assigned_rate_limits": { 00:08:13.341 "r_mbytes_per_sec": 0, 00:08:13.341 "rw_ios_per_sec": 0, 00:08:13.341 "rw_mbytes_per_sec": 0, 00:08:13.341 "w_mbytes_per_sec": 0 00:08:13.341 }, 00:08:13.341 "block_size": 512, 00:08:13.341 "claim_type": "exclusive_write", 00:08:13.341 "claimed": true, 00:08:13.341 "driver_specific": {}, 00:08:13.341 "memory_domains": [ 00:08:13.341 { 00:08:13.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.341 "dma_device_type": 2 00:08:13.341 } 00:08:13.341 ], 00:08:13.341 "name": "Malloc1", 00:08:13.341 "num_blocks": 1048576, 00:08:13.341 "product_name": "Malloc disk", 00:08:13.341 "supported_io_types": { 00:08:13.341 "abort": true, 00:08:13.341 "compare": false, 00:08:13.341 "compare_and_write": false, 00:08:13.341 "flush": true, 00:08:13.341 "nvme_admin": false, 00:08:13.341 "nvme_io": false, 00:08:13.341 "read": true, 00:08:13.341 "reset": true, 00:08:13.341 "unmap": true, 00:08:13.341 "write": true, 00:08:13.341 "write_zeroes": true 00:08:13.341 }, 00:08:13.341 "uuid": "60c3f085-0208-4bf6-86d6-5eac22537b49", 00:08:13.341 "zoned": false 00:08:13.341 } 00:08:13.341 ]' 00:08:13.341 18:24:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:13.341 18:24:20 -- common/autotest_common.sh@1362 -- # bs=512 00:08:13.341 18:24:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:13.341 18:24:20 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:13.341 18:24:20 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:13.341 18:24:20 -- common/autotest_common.sh@1367 -- # echo 512 00:08:13.341 18:24:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:13.341 18:24:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.600 18:24:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.600 18:24:20 -- common/autotest_common.sh@1177 -- # local i=0 00:08:13.600 18:24:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.600 18:24:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:13.600 18:24:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:15.497 18:24:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:15.497 18:24:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:15.497 18:24:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.497 18:24:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:15.497 18:24:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.497 18:24:22 -- common/autotest_common.sh@1187 -- # return 0 00:08:15.497 18:24:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:15.497 18:24:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:15.497 18:24:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:15.497 18:24:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:15.497 18:24:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:15.497 18:24:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:15.497 18:24:22 -- 
setup/common.sh@80 -- # echo 536870912 00:08:15.497 18:24:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:15.497 18:24:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:15.497 18:24:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:15.497 18:24:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:15.497 18:24:22 -- target/filesystem.sh@69 -- # partprobe 00:08:15.755 18:24:22 -- target/filesystem.sh@70 -- # sleep 1 00:08:16.695 18:24:23 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:16.695 18:24:23 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:16.695 18:24:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:16.695 18:24:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.695 18:24:23 -- common/autotest_common.sh@10 -- # set +x 00:08:16.695 ************************************ 00:08:16.695 START TEST filesystem_ext4 00:08:16.695 ************************************ 00:08:16.695 18:24:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:16.695 18:24:23 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:16.695 18:24:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.695 18:24:23 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:16.695 18:24:23 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:16.695 18:24:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:16.695 18:24:23 -- common/autotest_common.sh@904 -- # local i=0 00:08:16.695 18:24:23 -- common/autotest_common.sh@905 -- # local force 00:08:16.695 18:24:23 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:16.695 18:24:23 -- common/autotest_common.sh@908 -- # force=-F 00:08:16.695 18:24:23 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:16.695 mke2fs 1.46.5 (30-Dec-2021) 00:08:16.695 Discarding device blocks: 0/522240 done 00:08:16.695 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.695 Filesystem UUID: bb60cc11-434b-43ba-9d62-a15838f8912b 00:08:16.695 Superblock backups stored on blocks: 00:08:16.695 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.695 00:08:16.695 Allocating group tables: 0/64 done 00:08:16.695 Writing inode tables: 0/64 done 00:08:16.965 Creating journal (8192 blocks): done 00:08:16.965 Writing superblocks and filesystem accounting information: 0/64 done 00:08:16.965 00:08:16.965 18:24:24 -- common/autotest_common.sh@921 -- # return 0 00:08:16.965 18:24:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.965 18:24:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.965 18:24:24 -- target/filesystem.sh@25 -- # sync 00:08:16.965 18:24:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.965 18:24:24 -- target/filesystem.sh@27 -- # sync 00:08:16.965 18:24:24 -- target/filesystem.sh@29 -- # i=0 00:08:16.965 18:24:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.965 18:24:24 -- target/filesystem.sh@37 -- # kill -0 72472 00:08:16.965 18:24:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.965 18:24:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.224 18:24:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.224 18:24:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.224 00:08:17.224 real 0m0.419s 00:08:17.224 user 0m0.023s 00:08:17.224 sys 0m0.059s 00:08:17.224 
18:24:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.224 18:24:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.224 ************************************ 00:08:17.224 END TEST filesystem_ext4 00:08:17.224 ************************************ 00:08:17.224 18:24:24 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.224 18:24:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.224 18:24:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.224 18:24:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.224 ************************************ 00:08:17.224 START TEST filesystem_btrfs 00:08:17.224 ************************************ 00:08:17.224 18:24:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.224 18:24:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.224 18:24:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.224 18:24:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.224 18:24:24 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:17.224 18:24:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.224 18:24:24 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.224 18:24:24 -- common/autotest_common.sh@905 -- # local force 00:08:17.224 18:24:24 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:17.224 18:24:24 -- common/autotest_common.sh@910 -- # force=-f 00:08:17.224 18:24:24 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:17.224 btrfs-progs v6.6.2 00:08:17.224 See https://btrfs.readthedocs.io for more information. 00:08:17.224 00:08:17.225 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:17.225 NOTE: several default settings have changed in version 5.15, please make sure 00:08:17.225 this does not affect your deployments: 00:08:17.225 - DUP for metadata (-m dup) 00:08:17.225 - enabled no-holes (-O no-holes) 00:08:17.225 - enabled free-space-tree (-R free-space-tree) 00:08:17.225 00:08:17.225 Label: (null) 00:08:17.225 UUID: 18fc93d6-e266-4bc6-ab3c-3b73584dfbcb 00:08:17.225 Node size: 16384 00:08:17.225 Sector size: 4096 00:08:17.225 Filesystem size: 510.00MiB 00:08:17.225 Block group profiles: 00:08:17.225 Data: single 8.00MiB 00:08:17.225 Metadata: DUP 32.00MiB 00:08:17.225 System: DUP 8.00MiB 00:08:17.225 SSD detected: yes 00:08:17.225 Zoned device: no 00:08:17.225 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:17.225 Runtime features: free-space-tree 00:08:17.225 Checksum: crc32c 00:08:17.225 Number of devices: 1 00:08:17.225 Devices: 00:08:17.225 ID SIZE PATH 00:08:17.225 1 510.00MiB /dev/nvme0n1p1 00:08:17.225 00:08:17.225 18:24:24 -- common/autotest_common.sh@921 -- # return 0 00:08:17.225 18:24:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.225 18:24:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.225 18:24:24 -- target/filesystem.sh@25 -- # sync 00:08:17.225 18:24:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.225 18:24:24 -- target/filesystem.sh@27 -- # sync 00:08:17.225 18:24:24 -- target/filesystem.sh@29 -- # i=0 00:08:17.225 18:24:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.483 18:24:24 -- target/filesystem.sh@37 -- # kill -0 72472 00:08:17.483 18:24:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.483 18:24:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.483 18:24:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.483 18:24:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.483 00:08:17.483 real 0m0.225s 00:08:17.483 user 0m0.023s 00:08:17.483 sys 0m0.061s 00:08:17.483 18:24:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.483 18:24:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.483 ************************************ 00:08:17.483 END TEST filesystem_btrfs 00:08:17.483 ************************************ 00:08:17.483 18:24:24 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:17.483 18:24:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.483 18:24:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.483 18:24:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.483 ************************************ 00:08:17.483 START TEST filesystem_xfs 00:08:17.483 ************************************ 00:08:17.483 18:24:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:17.483 18:24:24 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:17.483 18:24:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.483 18:24:24 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:17.483 18:24:24 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:17.483 18:24:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.483 18:24:24 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.483 18:24:24 -- common/autotest_common.sh@905 -- # local force 00:08:17.483 18:24:24 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:17.483 18:24:24 -- common/autotest_common.sh@910 -- # force=-f 00:08:17.483 18:24:24 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:17.483 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:17.483 = sectsz=512 attr=2, projid32bit=1 00:08:17.483 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:17.483 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:17.483 data = bsize=4096 blocks=130560, imaxpct=25 00:08:17.483 = sunit=0 swidth=0 blks 00:08:17.483 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:17.483 log =internal log bsize=4096 blocks=16384, version=2 00:08:17.483 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:17.483 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:18.415 Discarding blocks...Done. 00:08:18.415 18:24:25 -- common/autotest_common.sh@921 -- # return 0 00:08:18.415 18:24:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.941 18:24:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.941 18:24:27 -- target/filesystem.sh@25 -- # sync 00:08:20.941 18:24:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.941 18:24:27 -- target/filesystem.sh@27 -- # sync 00:08:20.941 18:24:27 -- target/filesystem.sh@29 -- # i=0 00:08:20.941 18:24:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.941 18:24:27 -- target/filesystem.sh@37 -- # kill -0 72472 00:08:20.941 18:24:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.941 18:24:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.941 18:24:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.941 18:24:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.941 00:08:20.941 real 0m3.121s 00:08:20.941 user 0m0.027s 00:08:20.941 sys 0m0.053s 00:08:20.941 18:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.941 18:24:27 -- common/autotest_common.sh@10 -- # set +x 00:08:20.941 ************************************ 00:08:20.941 END TEST filesystem_xfs 00:08:20.941 ************************************ 00:08:20.941 18:24:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:20.941 18:24:27 -- target/filesystem.sh@93 -- # sync 00:08:20.941 18:24:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.941 18:24:27 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.941 18:24:27 -- common/autotest_common.sh@1198 -- # local i=0 00:08:20.941 18:24:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:20.941 18:24:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.941 18:24:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:20.941 18:24:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.941 18:24:27 -- common/autotest_common.sh@1210 -- # return 0 00:08:20.941 18:24:27 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.941 18:24:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.941 18:24:27 -- common/autotest_common.sh@10 -- # set +x 00:08:20.941 18:24:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.941 18:24:27 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:20.941 18:24:27 -- target/filesystem.sh@101 -- # killprocess 72472 00:08:20.941 18:24:27 -- common/autotest_common.sh@926 -- # '[' -z 72472 ']' 00:08:20.941 18:24:27 -- common/autotest_common.sh@930 -- # kill -0 72472 00:08:20.941 18:24:27 -- 
common/autotest_common.sh@931 -- # uname 00:08:20.941 18:24:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:20.941 18:24:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72472 00:08:20.941 18:24:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:20.941 18:24:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:20.941 18:24:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72472' 00:08:20.941 killing process with pid 72472 00:08:20.941 18:24:28 -- common/autotest_common.sh@945 -- # kill 72472 00:08:20.941 18:24:28 -- common/autotest_common.sh@950 -- # wait 72472 00:08:21.200 18:24:28 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:21.200 00:08:21.200 real 0m9.241s 00:08:21.200 user 0m34.881s 00:08:21.200 sys 0m1.618s 00:08:21.200 18:24:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.200 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.200 ************************************ 00:08:21.200 END TEST nvmf_filesystem_no_in_capsule 00:08:21.200 ************************************ 00:08:21.200 18:24:28 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:21.200 18:24:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:21.200 18:24:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.200 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.200 ************************************ 00:08:21.200 START TEST nvmf_filesystem_in_capsule 00:08:21.200 ************************************ 00:08:21.200 18:24:28 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:21.200 18:24:28 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:21.200 18:24:28 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:21.200 18:24:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:21.200 18:24:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:21.200 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.200 18:24:28 -- nvmf/common.sh@469 -- # nvmfpid=72783 00:08:21.200 18:24:28 -- nvmf/common.sh@470 -- # waitforlisten 72783 00:08:21.200 18:24:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.200 18:24:28 -- common/autotest_common.sh@819 -- # '[' -z 72783 ']' 00:08:21.200 18:24:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.200 18:24:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:21.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.200 18:24:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.200 18:24:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:21.200 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.200 [2024-07-14 18:24:28.525053] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
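The trace above shows nvmfappstart launching the target inside the nvmf_tgt_ns_spdk namespace and waitforlisten polling for its RPC socket before the test continues. A condensed sketch of that start-up sequence, using the exact command from this run; the polling loop is only an approximation of the waitforlisten helper (which also probes the socket via RPC), not its actual code:

  # start the target in the test namespace (flags taken from the trace above)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the app has created its RPC socket, bail out if it dies first
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.1
  done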
00:08:21.200 [2024-07-14 18:24:28.525161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.459 [2024-07-14 18:24:28.664756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.459 [2024-07-14 18:24:28.761122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.459 [2024-07-14 18:24:28.761274] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.459 [2024-07-14 18:24:28.761287] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.459 [2024-07-14 18:24:28.761297] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.459 [2024-07-14 18:24:28.761457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.459 [2024-07-14 18:24:28.761566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.459 [2024-07-14 18:24:28.762178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.459 [2024-07-14 18:24:28.762221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.395 18:24:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:22.395 18:24:29 -- common/autotest_common.sh@852 -- # return 0 00:08:22.395 18:24:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:22.395 18:24:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 18:24:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.395 18:24:29 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:22.395 18:24:29 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:22.395 18:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 [2024-07-14 18:24:29.497553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.395 18:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.395 18:24:29 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:22.395 18:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 Malloc1 00:08:22.395 18:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.395 18:24:29 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.395 18:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 18:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.395 18:24:29 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.395 18:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 18:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.395 18:24:29 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.395 18:24:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 [2024-07-14 18:24:29.680398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.395 18:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.395 18:24:29 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:22.395 18:24:29 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:22.395 18:24:29 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:22.395 18:24:29 -- common/autotest_common.sh@1359 -- # local bs 00:08:22.395 18:24:29 -- common/autotest_common.sh@1360 -- # local nb 00:08:22.395 18:24:29 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:22.395 18:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.395 18:24:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.395 18:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.395 18:24:29 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:22.395 { 00:08:22.395 "aliases": [ 00:08:22.395 "98ed0ece-9c15-4afc-9ab7-d75e1ab2ad8c" 00:08:22.395 ], 00:08:22.395 "assigned_rate_limits": { 00:08:22.395 "r_mbytes_per_sec": 0, 00:08:22.395 "rw_ios_per_sec": 0, 00:08:22.395 "rw_mbytes_per_sec": 0, 00:08:22.395 "w_mbytes_per_sec": 0 00:08:22.395 }, 00:08:22.395 "block_size": 512, 00:08:22.395 "claim_type": "exclusive_write", 00:08:22.395 "claimed": true, 00:08:22.395 "driver_specific": {}, 00:08:22.395 "memory_domains": [ 00:08:22.395 { 00:08:22.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.395 "dma_device_type": 2 00:08:22.395 } 00:08:22.395 ], 00:08:22.395 "name": "Malloc1", 00:08:22.395 "num_blocks": 1048576, 00:08:22.395 "product_name": "Malloc disk", 00:08:22.395 "supported_io_types": { 00:08:22.395 "abort": true, 00:08:22.395 "compare": false, 00:08:22.395 "compare_and_write": false, 00:08:22.395 "flush": true, 00:08:22.395 "nvme_admin": false, 00:08:22.395 "nvme_io": false, 00:08:22.395 "read": true, 00:08:22.395 "reset": true, 00:08:22.395 "unmap": true, 00:08:22.395 "write": true, 00:08:22.395 "write_zeroes": true 00:08:22.395 }, 00:08:22.395 "uuid": "98ed0ece-9c15-4afc-9ab7-d75e1ab2ad8c", 00:08:22.395 "zoned": false 00:08:22.395 } 00:08:22.395 ]' 00:08:22.395 18:24:29 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:22.395 18:24:29 -- common/autotest_common.sh@1362 -- # bs=512 00:08:22.395 18:24:29 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:22.395 18:24:29 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:22.395 18:24:29 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:22.395 18:24:29 -- common/autotest_common.sh@1367 -- # echo 512 00:08:22.654 18:24:29 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:22.654 18:24:29 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.654 18:24:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.654 18:24:29 -- common/autotest_common.sh@1177 -- # local i=0 00:08:22.654 18:24:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.654 18:24:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:22.654 18:24:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:25.186 18:24:31 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:25.186 18:24:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.186 18:24:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:25.186 18:24:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:25.186 18:24:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.186 18:24:32 -- common/autotest_common.sh@1187 -- # return 0 00:08:25.186 18:24:32 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:25.186 18:24:32 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:25.186 18:24:32 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:25.186 18:24:32 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:25.186 18:24:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:25.186 18:24:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:25.186 18:24:32 -- setup/common.sh@80 -- # echo 536870912 00:08:25.186 18:24:32 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:25.186 18:24:32 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:25.186 18:24:32 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:25.186 18:24:32 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:25.186 18:24:32 -- target/filesystem.sh@69 -- # partprobe 00:08:25.186 18:24:32 -- target/filesystem.sh@70 -- # sleep 1 00:08:25.752 18:24:33 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:25.753 18:24:33 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:25.753 18:24:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.753 18:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.753 18:24:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 ************************************ 00:08:25.753 START TEST filesystem_in_capsule_ext4 00:08:25.753 ************************************ 00:08:25.753 18:24:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:25.753 18:24:33 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:25.753 18:24:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.753 18:24:33 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:25.753 18:24:33 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:25.753 18:24:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.753 18:24:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.753 18:24:33 -- common/autotest_common.sh@905 -- # local force 00:08:25.753 18:24:33 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:25.753 18:24:33 -- common/autotest_common.sh@908 -- # force=-F 00:08:25.753 18:24:33 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:25.753 mke2fs 1.46.5 (30-Dec-2021) 00:08:26.014 Discarding device blocks: 0/522240 done 00:08:26.014 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:26.014 Filesystem UUID: 2a18d7c6-9e94-4bb0-9853-53ede3468a49 00:08:26.014 Superblock backups stored on blocks: 00:08:26.014 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:26.014 00:08:26.014 Allocating group tables: 0/64 done 00:08:26.014 Writing inode tables: 0/64 done 00:08:26.014 Creating journal (8192 blocks): done 00:08:26.014 Writing superblocks and filesystem accounting information: 0/64 done 00:08:26.014 00:08:26.014 
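The initiator-side steps traced above reduce to: connect to the subsystem over TCP, find the namespace's block device by its serial number, carve one whole-disk GPT partition, and put a filesystem on it. A sketch built only from the commands visible in the trace, with the hostnqn/hostid values from this run; the size check against the 512 MiB malloc bdev is omitted:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db \
      --hostid=42162aed-0e24-4758-911b-86aefe0815db

  # the namespace shows up as the block device whose SERIAL matches the subsystem serial
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

  # one whole-disk partition, then a filesystem on it
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkdir -p /mnt/device
  mkfs.ext4 -F "/dev/${nvme_name}p1"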
18:24:33 -- common/autotest_common.sh@921 -- # return 0 00:08:26.014 18:24:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.014 18:24:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:26.014 18:24:33 -- target/filesystem.sh@25 -- # sync 00:08:26.276 18:24:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:26.276 18:24:33 -- target/filesystem.sh@27 -- # sync 00:08:26.276 18:24:33 -- target/filesystem.sh@29 -- # i=0 00:08:26.276 18:24:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:26.276 18:24:33 -- target/filesystem.sh@37 -- # kill -0 72783 00:08:26.276 18:24:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.276 18:24:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.276 18:24:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.276 18:24:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.276 00:08:26.276 real 0m0.330s 00:08:26.276 user 0m0.021s 00:08:26.276 sys 0m0.060s 00:08:26.276 18:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.276 18:24:33 -- common/autotest_common.sh@10 -- # set +x 00:08:26.276 ************************************ 00:08:26.276 END TEST filesystem_in_capsule_ext4 00:08:26.276 ************************************ 00:08:26.276 18:24:33 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:26.276 18:24:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:26.276 18:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.276 18:24:33 -- common/autotest_common.sh@10 -- # set +x 00:08:26.276 ************************************ 00:08:26.276 START TEST filesystem_in_capsule_btrfs 00:08:26.276 ************************************ 00:08:26.276 18:24:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:26.276 18:24:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:26.276 18:24:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.276 18:24:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:26.276 18:24:33 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:26.277 18:24:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:26.277 18:24:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:26.277 18:24:33 -- common/autotest_common.sh@905 -- # local force 00:08:26.277 18:24:33 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:26.277 18:24:33 -- common/autotest_common.sh@910 -- # force=-f 00:08:26.277 18:24:33 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:26.534 btrfs-progs v6.6.2 00:08:26.534 See https://btrfs.readthedocs.io for more information. 00:08:26.534 00:08:26.534 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:26.534 NOTE: several default settings have changed in version 5.15, please make sure 00:08:26.534 this does not affect your deployments: 00:08:26.534 - DUP for metadata (-m dup) 00:08:26.534 - enabled no-holes (-O no-holes) 00:08:26.534 - enabled free-space-tree (-R free-space-tree) 00:08:26.534 00:08:26.534 Label: (null) 00:08:26.534 UUID: 767590a0-9249-45fe-8571-97b290f7cba6 00:08:26.534 Node size: 16384 00:08:26.534 Sector size: 4096 00:08:26.534 Filesystem size: 510.00MiB 00:08:26.534 Block group profiles: 00:08:26.534 Data: single 8.00MiB 00:08:26.534 Metadata: DUP 32.00MiB 00:08:26.534 System: DUP 8.00MiB 00:08:26.534 SSD detected: yes 00:08:26.534 Zoned device: no 00:08:26.534 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:26.534 Runtime features: free-space-tree 00:08:26.534 Checksum: crc32c 00:08:26.534 Number of devices: 1 00:08:26.534 Devices: 00:08:26.534 ID SIZE PATH 00:08:26.534 1 510.00MiB /dev/nvme0n1p1 00:08:26.534 00:08:26.534 18:24:33 -- common/autotest_common.sh@921 -- # return 0 00:08:26.534 18:24:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.534 18:24:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:26.534 18:24:33 -- target/filesystem.sh@25 -- # sync 00:08:26.534 18:24:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:26.534 18:24:33 -- target/filesystem.sh@27 -- # sync 00:08:26.534 18:24:33 -- target/filesystem.sh@29 -- # i=0 00:08:26.534 18:24:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:26.534 18:24:33 -- target/filesystem.sh@37 -- # kill -0 72783 00:08:26.534 18:24:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.534 18:24:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.534 18:24:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.534 18:24:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.534 00:08:26.534 real 0m0.274s 00:08:26.534 user 0m0.026s 00:08:26.534 sys 0m0.070s 00:08:26.534 18:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.534 18:24:33 -- common/autotest_common.sh@10 -- # set +x 00:08:26.534 ************************************ 00:08:26.534 END TEST filesystem_in_capsule_btrfs 00:08:26.534 ************************************ 00:08:26.534 18:24:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:26.534 18:24:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:26.534 18:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.534 18:24:33 -- common/autotest_common.sh@10 -- # set +x 00:08:26.534 ************************************ 00:08:26.534 START TEST filesystem_in_capsule_xfs 00:08:26.534 ************************************ 00:08:26.534 18:24:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:26.534 18:24:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:26.534 18:24:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.534 18:24:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:26.534 18:24:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:26.534 18:24:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:26.534 18:24:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:26.534 18:24:33 -- common/autotest_common.sh@905 -- # local force 00:08:26.534 18:24:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:26.534 18:24:33 -- common/autotest_common.sh@910 -- # force=-f 
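The make_filesystem trace that repeats before every mkfs call amounts to picking the right force flag per filesystem type before invoking mkfs. A minimal sketch of that idea, assuming only the branches visible in the trace; the real helper also tracks a retry counter (the i variable seen above), which is omitted here:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # ext4 forces with -F, btrfs and xfs with -f (matches the '[' $fstype = ext4 ']' branch above)
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }

  # e.g. make_filesystem xfs /dev/nvme0n1p1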
00:08:26.534 18:24:33 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:26.534 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:26.534 = sectsz=512 attr=2, projid32bit=1 00:08:26.534 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:26.534 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:26.534 data = bsize=4096 blocks=130560, imaxpct=25 00:08:26.534 = sunit=0 swidth=0 blks 00:08:26.534 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:26.534 log =internal log bsize=4096 blocks=16384, version=2 00:08:26.534 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:26.534 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:27.466 Discarding blocks...Done. 00:08:27.466 18:24:34 -- common/autotest_common.sh@921 -- # return 0 00:08:27.466 18:24:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.364 18:24:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.364 18:24:36 -- target/filesystem.sh@25 -- # sync 00:08:29.364 18:24:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.364 18:24:36 -- target/filesystem.sh@27 -- # sync 00:08:29.364 18:24:36 -- target/filesystem.sh@29 -- # i=0 00:08:29.364 18:24:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.364 18:24:36 -- target/filesystem.sh@37 -- # kill -0 72783 00:08:29.364 18:24:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.365 18:24:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.365 18:24:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.365 18:24:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.365 00:08:29.365 real 0m2.607s 00:08:29.365 user 0m0.021s 00:08:29.365 sys 0m0.058s 00:08:29.365 18:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.365 18:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:29.365 ************************************ 00:08:29.365 END TEST filesystem_in_capsule_xfs 00:08:29.365 ************************************ 00:08:29.365 18:24:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:29.365 18:24:36 -- target/filesystem.sh@93 -- # sync 00:08:29.365 18:24:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.365 18:24:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.365 18:24:36 -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.365 18:24:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:29.365 18:24:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.365 18:24:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.365 18:24:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:29.365 18:24:36 -- common/autotest_common.sh@1210 -- # return 0 00:08:29.365 18:24:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.365 18:24:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.365 18:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:29.365 18:24:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.365 18:24:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:29.365 18:24:36 -- target/filesystem.sh@101 -- # killprocess 72783 00:08:29.365 18:24:36 -- common/autotest_common.sh@926 -- # '[' -z 72783 ']' 00:08:29.365 18:24:36 -- common/autotest_common.sh@930 -- # kill -0 72783 
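Every filesystem_* subtest above runs the same smoke test after mkfs: mount the partition over NVMe/TCP, do one small write, unmount, then confirm the target process and the block devices are still there. The sequence, copied from the trace with the pid of this run:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa          # one small write through the fabric
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  kill -0 72783                              # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # and so is its partition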
00:08:29.365 18:24:36 -- common/autotest_common.sh@931 -- # uname 00:08:29.365 18:24:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:29.365 18:24:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72783 00:08:29.365 18:24:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:29.365 18:24:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:29.365 18:24:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72783' 00:08:29.365 killing process with pid 72783 00:08:29.365 18:24:36 -- common/autotest_common.sh@945 -- # kill 72783 00:08:29.365 18:24:36 -- common/autotest_common.sh@950 -- # wait 72783 00:08:29.623 18:24:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:29.623 00:08:29.623 real 0m8.577s 00:08:29.623 user 0m32.266s 00:08:29.623 sys 0m1.554s 00:08:29.623 18:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.623 ************************************ 00:08:29.623 END TEST nvmf_filesystem_in_capsule 00:08:29.623 18:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:29.623 ************************************ 00:08:29.881 18:24:37 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:29.881 18:24:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:29.881 18:24:37 -- nvmf/common.sh@116 -- # sync 00:08:29.881 18:24:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:29.881 18:24:37 -- nvmf/common.sh@119 -- # set +e 00:08:29.881 18:24:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:29.881 18:24:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:29.881 rmmod nvme_tcp 00:08:29.881 rmmod nvme_fabrics 00:08:29.881 rmmod nvme_keyring 00:08:29.881 18:24:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:29.881 18:24:37 -- nvmf/common.sh@123 -- # set -e 00:08:29.881 18:24:37 -- nvmf/common.sh@124 -- # return 0 00:08:29.881 18:24:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:29.881 18:24:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:29.881 18:24:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:29.881 18:24:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:29.881 18:24:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.881 18:24:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:29.881 18:24:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.881 18:24:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.881 18:24:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.881 18:24:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:29.881 00:08:29.881 real 0m18.618s 00:08:29.881 user 1m7.386s 00:08:29.881 sys 0m3.547s 00:08:29.881 18:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.881 18:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:29.881 ************************************ 00:08:29.881 END TEST nvmf_filesystem 00:08:29.881 ************************************ 00:08:29.882 18:24:37 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:29.882 18:24:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:29.882 18:24:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.882 18:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:29.882 ************************************ 00:08:29.882 START TEST nvmf_discovery 00:08:29.882 ************************************ 00:08:29.882 18:24:37 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:30.140 * Looking for test storage... 00:08:30.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.140 18:24:37 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.140 18:24:37 -- nvmf/common.sh@7 -- # uname -s 00:08:30.140 18:24:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.140 18:24:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.140 18:24:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.140 18:24:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.140 18:24:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.140 18:24:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.140 18:24:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.140 18:24:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.140 18:24:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.140 18:24:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.140 18:24:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:08:30.140 18:24:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:08:30.140 18:24:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.140 18:24:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.140 18:24:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.140 18:24:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.140 18:24:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.140 18:24:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.140 18:24:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.140 18:24:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.140 18:24:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.140 18:24:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.140 18:24:37 -- paths/export.sh@5 -- # export PATH 00:08:30.140 18:24:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.140 18:24:37 -- nvmf/common.sh@46 -- # : 0 00:08:30.140 18:24:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:30.140 18:24:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:30.140 18:24:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:30.140 18:24:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.140 18:24:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.140 18:24:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:30.140 18:24:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:30.140 18:24:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:30.140 18:24:37 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:30.140 18:24:37 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:30.140 18:24:37 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:30.140 18:24:37 -- target/discovery.sh@15 -- # hash nvme 00:08:30.140 18:24:37 -- target/discovery.sh@20 -- # nvmftestinit 00:08:30.140 18:24:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:30.140 18:24:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.140 18:24:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:30.140 18:24:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:30.140 18:24:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:30.140 18:24:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.140 18:24:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.140 18:24:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.140 18:24:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:30.140 18:24:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:30.140 18:24:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:30.140 18:24:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:30.140 18:24:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:30.140 18:24:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:30.140 18:24:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.140 18:24:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.140 18:24:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:30.140 18:24:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:30.140 18:24:37 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.140 18:24:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.140 18:24:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.140 18:24:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.140 18:24:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.140 18:24:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.140 18:24:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.140 18:24:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.140 18:24:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:30.140 18:24:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:30.140 Cannot find device "nvmf_tgt_br" 00:08:30.140 18:24:37 -- nvmf/common.sh@154 -- # true 00:08:30.140 18:24:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.140 Cannot find device "nvmf_tgt_br2" 00:08:30.140 18:24:37 -- nvmf/common.sh@155 -- # true 00:08:30.140 18:24:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:30.140 18:24:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:30.140 Cannot find device "nvmf_tgt_br" 00:08:30.140 18:24:37 -- nvmf/common.sh@157 -- # true 00:08:30.140 18:24:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:30.140 Cannot find device "nvmf_tgt_br2" 00:08:30.140 18:24:37 -- nvmf/common.sh@158 -- # true 00:08:30.140 18:24:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:30.140 18:24:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:30.141 18:24:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.141 18:24:37 -- nvmf/common.sh@161 -- # true 00:08:30.141 18:24:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.141 18:24:37 -- nvmf/common.sh@162 -- # true 00:08:30.141 18:24:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.141 18:24:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.141 18:24:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.141 18:24:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.141 18:24:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.141 18:24:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.398 18:24:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.398 18:24:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:30.398 18:24:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:30.398 18:24:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:30.398 18:24:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:30.398 18:24:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:30.398 18:24:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:30.398 18:24:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.398 18:24:37 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.398 18:24:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.398 18:24:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:30.398 18:24:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:30.398 18:24:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.398 18:24:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.398 18:24:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.398 18:24:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.399 18:24:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.399 18:24:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:30.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:08:30.399 00:08:30.399 --- 10.0.0.2 ping statistics --- 00:08:30.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.399 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:30.399 18:24:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:30.399 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.399 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:30.399 00:08:30.399 --- 10.0.0.3 ping statistics --- 00:08:30.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.399 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:30.399 18:24:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:30.399 00:08:30.399 --- 10.0.0.1 ping statistics --- 00:08:30.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.399 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:30.399 18:24:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.399 18:24:37 -- nvmf/common.sh@421 -- # return 0 00:08:30.399 18:24:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:30.399 18:24:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.399 18:24:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:30.399 18:24:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:30.399 18:24:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.399 18:24:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:30.399 18:24:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:30.399 18:24:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:30.399 18:24:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:30.399 18:24:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:30.399 18:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.399 18:24:37 -- nvmf/common.sh@469 -- # nvmfpid=73228 00:08:30.399 18:24:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.399 18:24:37 -- nvmf/common.sh@470 -- # waitforlisten 73228 00:08:30.399 18:24:37 -- common/autotest_common.sh@819 -- # '[' -z 73228 ']' 00:08:30.399 18:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.399 18:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:30.399 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.399 18:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.399 18:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:30.399 18:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.399 [2024-07-14 18:24:37.785741] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:30.399 [2024-07-14 18:24:37.785836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.657 [2024-07-14 18:24:37.924887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.657 [2024-07-14 18:24:38.020984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.657 [2024-07-14 18:24:38.021361] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.657 [2024-07-14 18:24:38.021514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.657 [2024-07-14 18:24:38.021650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.657 [2024-07-14 18:24:38.021906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.658 [2024-07-14 18:24:38.022040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.658 [2024-07-14 18:24:38.022620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.658 [2024-07-14 18:24:38.022626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.592 18:24:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:31.592 18:24:38 -- common/autotest_common.sh@852 -- # return 0 00:08:31.592 18:24:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:31.593 18:24:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.593 18:24:38 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 [2024-07-14 18:24:38.818987] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@26 -- # seq 1 4 00:08:31.593 18:24:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.593 18:24:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 Null1 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
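Before this target came up, the nvmf_veth_init trace above built the virtual topology the whole run relies on: the target's interfaces (10.0.0.2, 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, the initiator interface (10.0.0.1) stays in the root namespace, and the host-side veth ends are bridged together. A condensed sketch using only commands from that trace; the second target interface is set up the same way and is left out here:

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: one end stays host-side for the bridge, the other is the usable interface
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator in the root namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side ends and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # initiator-to-target sanity check, as in the trace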
00:08:31.593 18:24:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 [2024-07-14 18:24:38.896584] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.593 18:24:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 Null2 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.593 18:24:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 Null3 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.593 18:24:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 Null4 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:31.593 18:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:39 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.593 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.593 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.593 18:24:39 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.593 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.593 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 4420 00:08:31.852 00:08:31.852 Discovery Log Number of Records 6, Generation counter 6 00:08:31.852 =====Discovery Log Entry 0====== 00:08:31.852 trtype: tcp 00:08:31.852 adrfam: ipv4 00:08:31.852 subtype: current discovery subsystem 00:08:31.852 treq: not required 00:08:31.852 portid: 0 00:08:31.852 trsvcid: 4420 00:08:31.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.852 traddr: 10.0.0.2 00:08:31.852 eflags: explicit discovery connections, duplicate discovery information 00:08:31.852 sectype: none 00:08:31.852 =====Discovery Log Entry 1====== 00:08:31.852 trtype: tcp 00:08:31.852 adrfam: ipv4 00:08:31.852 subtype: nvme subsystem 00:08:31.852 treq: not required 00:08:31.852 portid: 0 00:08:31.852 trsvcid: 4420 00:08:31.852 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:31.852 traddr: 10.0.0.2 00:08:31.852 eflags: none 00:08:31.852 sectype: none 00:08:31.852 =====Discovery Log Entry 2====== 00:08:31.852 trtype: tcp 00:08:31.852 adrfam: ipv4 00:08:31.852 subtype: nvme subsystem 00:08:31.852 treq: not required 00:08:31.852 portid: 0 00:08:31.852 trsvcid: 4420 
00:08:31.852 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:31.852 traddr: 10.0.0.2 00:08:31.852 eflags: none 00:08:31.852 sectype: none 00:08:31.852 =====Discovery Log Entry 3====== 00:08:31.852 trtype: tcp 00:08:31.852 adrfam: ipv4 00:08:31.852 subtype: nvme subsystem 00:08:31.852 treq: not required 00:08:31.852 portid: 0 00:08:31.852 trsvcid: 4420 00:08:31.852 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:31.852 traddr: 10.0.0.2 00:08:31.852 eflags: none 00:08:31.852 sectype: none 00:08:31.852 =====Discovery Log Entry 4====== 00:08:31.852 trtype: tcp 00:08:31.852 adrfam: ipv4 00:08:31.852 subtype: nvme subsystem 00:08:31.852 treq: not required 00:08:31.852 portid: 0 00:08:31.852 trsvcid: 4420 00:08:31.852 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:31.852 traddr: 10.0.0.2 00:08:31.852 eflags: none 00:08:31.852 sectype: none 00:08:31.852 =====Discovery Log Entry 5====== 00:08:31.852 trtype: tcp 00:08:31.852 adrfam: ipv4 00:08:31.852 subtype: discovery subsystem referral 00:08:31.852 treq: not required 00:08:31.852 portid: 0 00:08:31.852 trsvcid: 4430 00:08:31.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.852 traddr: 10.0.0.2 00:08:31.852 eflags: none 00:08:31.852 sectype: none 00:08:31.852 Perform nvmf subsystem discovery via RPC 00:08:31.852 18:24:39 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:31.852 18:24:39 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 [2024-07-14 18:24:39.088616] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:31.852 [ 00:08:31.852 { 00:08:31.852 "allow_any_host": true, 00:08:31.852 "hosts": [], 00:08:31.852 "listen_addresses": [ 00:08:31.852 { 00:08:31.852 "adrfam": "IPv4", 00:08:31.852 "traddr": "10.0.0.2", 00:08:31.852 "transport": "TCP", 00:08:31.852 "trsvcid": "4420", 00:08:31.852 "trtype": "TCP" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:31.852 "subtype": "Discovery" 00:08:31.852 }, 00:08:31.852 { 00:08:31.852 "allow_any_host": true, 00:08:31.852 "hosts": [], 00:08:31.852 "listen_addresses": [ 00:08:31.852 { 00:08:31.852 "adrfam": "IPv4", 00:08:31.852 "traddr": "10.0.0.2", 00:08:31.852 "transport": "TCP", 00:08:31.852 "trsvcid": "4420", 00:08:31.852 "trtype": "TCP" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "max_cntlid": 65519, 00:08:31.852 "max_namespaces": 32, 00:08:31.852 "min_cntlid": 1, 00:08:31.852 "model_number": "SPDK bdev Controller", 00:08:31.852 "namespaces": [ 00:08:31.852 { 00:08:31.852 "bdev_name": "Null1", 00:08:31.852 "name": "Null1", 00:08:31.852 "nguid": "11B70170E0344F178A0FA8745DE5BEE8", 00:08:31.852 "nsid": 1, 00:08:31.852 "uuid": "11b70170-e034-4f17-8a0f-a8745de5bee8" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.852 "serial_number": "SPDK00000000000001", 00:08:31.852 "subtype": "NVMe" 00:08:31.852 }, 00:08:31.852 { 00:08:31.852 "allow_any_host": true, 00:08:31.852 "hosts": [], 00:08:31.852 "listen_addresses": [ 00:08:31.852 { 00:08:31.852 "adrfam": "IPv4", 00:08:31.852 "traddr": "10.0.0.2", 00:08:31.852 "transport": "TCP", 00:08:31.852 "trsvcid": "4420", 00:08:31.852 "trtype": "TCP" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "max_cntlid": 65519, 00:08:31.852 "max_namespaces": 32, 00:08:31.852 "min_cntlid": 1, 
00:08:31.852 "model_number": "SPDK bdev Controller", 00:08:31.852 "namespaces": [ 00:08:31.852 { 00:08:31.852 "bdev_name": "Null2", 00:08:31.852 "name": "Null2", 00:08:31.852 "nguid": "B414E1527E4F43A8B64EA49839DBCE14", 00:08:31.852 "nsid": 1, 00:08:31.852 "uuid": "b414e152-7e4f-43a8-b64e-a49839dbce14" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:31.852 "serial_number": "SPDK00000000000002", 00:08:31.852 "subtype": "NVMe" 00:08:31.852 }, 00:08:31.852 { 00:08:31.852 "allow_any_host": true, 00:08:31.852 "hosts": [], 00:08:31.852 "listen_addresses": [ 00:08:31.852 { 00:08:31.852 "adrfam": "IPv4", 00:08:31.852 "traddr": "10.0.0.2", 00:08:31.852 "transport": "TCP", 00:08:31.852 "trsvcid": "4420", 00:08:31.852 "trtype": "TCP" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "max_cntlid": 65519, 00:08:31.852 "max_namespaces": 32, 00:08:31.852 "min_cntlid": 1, 00:08:31.852 "model_number": "SPDK bdev Controller", 00:08:31.852 "namespaces": [ 00:08:31.852 { 00:08:31.852 "bdev_name": "Null3", 00:08:31.852 "name": "Null3", 00:08:31.852 "nguid": "D0384FBBC58944138ACD77D1A5A1B2E4", 00:08:31.852 "nsid": 1, 00:08:31.852 "uuid": "d0384fbb-c589-4413-8acd-77d1a5a1b2e4" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:31.852 "serial_number": "SPDK00000000000003", 00:08:31.852 "subtype": "NVMe" 00:08:31.852 }, 00:08:31.852 { 00:08:31.852 "allow_any_host": true, 00:08:31.852 "hosts": [], 00:08:31.852 "listen_addresses": [ 00:08:31.852 { 00:08:31.852 "adrfam": "IPv4", 00:08:31.852 "traddr": "10.0.0.2", 00:08:31.852 "transport": "TCP", 00:08:31.852 "trsvcid": "4420", 00:08:31.852 "trtype": "TCP" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "max_cntlid": 65519, 00:08:31.852 "max_namespaces": 32, 00:08:31.852 "min_cntlid": 1, 00:08:31.852 "model_number": "SPDK bdev Controller", 00:08:31.852 "namespaces": [ 00:08:31.852 { 00:08:31.852 "bdev_name": "Null4", 00:08:31.852 "name": "Null4", 00:08:31.852 "nguid": "D1535414C143404FA5A43E96B6909785", 00:08:31.852 "nsid": 1, 00:08:31.852 "uuid": "d1535414-c143-404f-a5a4-3e96b6909785" 00:08:31.852 } 00:08:31.852 ], 00:08:31.852 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:31.852 "serial_number": "SPDK00000000000004", 00:08:31.852 "subtype": "NVMe" 00:08:31.852 } 00:08:31.852 ] 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@42 -- # seq 1 4 00:08:31.852 18:24:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.852 18:24:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.852 18:24:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.852 18:24:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.852 18:24:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.852 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.852 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.852 18:24:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:31.852 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.853 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.853 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.853 18:24:39 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.853 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.853 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.853 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.853 18:24:39 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:31.853 18:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.853 18:24:39 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:31.853 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.853 18:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.853 18:24:39 -- target/discovery.sh@49 -- # check_bdevs= 00:08:31.853 18:24:39 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:31.853 18:24:39 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:31.853 18:24:39 -- target/discovery.sh@57 -- # nvmftestfini 00:08:31.853 18:24:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:31.853 18:24:39 -- nvmf/common.sh@116 -- # sync 00:08:31.853 18:24:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:31.853 18:24:39 -- nvmf/common.sh@119 -- # set +e 00:08:31.853 18:24:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:31.853 18:24:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:31.853 rmmod nvme_tcp 00:08:31.853 rmmod nvme_fabrics 00:08:32.111 rmmod nvme_keyring 00:08:32.111 18:24:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:32.111 18:24:39 -- nvmf/common.sh@123 -- # set -e 00:08:32.111 18:24:39 -- nvmf/common.sh@124 -- # return 0 00:08:32.111 18:24:39 -- nvmf/common.sh@477 -- # '[' -n 73228 ']' 00:08:32.111 18:24:39 -- nvmf/common.sh@478 -- # killprocess 73228 00:08:32.111 18:24:39 -- common/autotest_common.sh@926 -- # '[' -z 73228 ']' 00:08:32.111 18:24:39 -- 
common/autotest_common.sh@930 -- # kill -0 73228 00:08:32.111 18:24:39 -- common/autotest_common.sh@931 -- # uname 00:08:32.111 18:24:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.111 18:24:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73228 00:08:32.111 18:24:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.111 18:24:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.111 18:24:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73228' 00:08:32.111 killing process with pid 73228 00:08:32.111 18:24:39 -- common/autotest_common.sh@945 -- # kill 73228 00:08:32.111 [2024-07-14 18:24:39.319868] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:32.111 18:24:39 -- common/autotest_common.sh@950 -- # wait 73228 00:08:32.111 18:24:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:32.111 18:24:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:32.111 18:24:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:32.111 18:24:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.111 18:24:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:32.111 18:24:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.111 18:24:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.111 18:24:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.370 18:24:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:32.370 00:08:32.370 real 0m2.317s 00:08:32.370 user 0m6.277s 00:08:32.370 sys 0m0.614s 00:08:32.370 18:24:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.370 ************************************ 00:08:32.370 END TEST nvmf_discovery 00:08:32.370 ************************************ 00:08:32.370 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.370 18:24:39 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:32.370 18:24:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:32.370 18:24:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.370 18:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.370 ************************************ 00:08:32.370 START TEST nvmf_referrals 00:08:32.370 ************************************ 00:08:32.370 18:24:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:32.370 * Looking for test storage... 
00:08:32.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.370 18:24:39 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.370 18:24:39 -- nvmf/common.sh@7 -- # uname -s 00:08:32.370 18:24:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.370 18:24:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.370 18:24:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.370 18:24:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.370 18:24:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.370 18:24:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.370 18:24:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.370 18:24:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.370 18:24:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.370 18:24:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.370 18:24:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:08:32.370 18:24:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:08:32.370 18:24:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.370 18:24:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.370 18:24:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.370 18:24:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.370 18:24:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.370 18:24:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.370 18:24:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.370 18:24:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.370 18:24:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.370 18:24:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.370 18:24:39 -- 
paths/export.sh@5 -- # export PATH 00:08:32.370 18:24:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.370 18:24:39 -- nvmf/common.sh@46 -- # : 0 00:08:32.370 18:24:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.370 18:24:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.370 18:24:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.370 18:24:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.370 18:24:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.370 18:24:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:32.370 18:24:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.370 18:24:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.370 18:24:39 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:32.370 18:24:39 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:32.370 18:24:39 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:32.370 18:24:39 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:32.370 18:24:39 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:32.370 18:24:39 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:32.370 18:24:39 -- target/referrals.sh@37 -- # nvmftestinit 00:08:32.370 18:24:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:32.370 18:24:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.370 18:24:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:32.370 18:24:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:32.370 18:24:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:32.370 18:24:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.370 18:24:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.370 18:24:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.370 18:24:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:32.370 18:24:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:32.370 18:24:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:32.370 18:24:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:32.370 18:24:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:32.370 18:24:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:32.370 18:24:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.370 18:24:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.370 18:24:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:32.370 18:24:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:32.370 18:24:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.370 18:24:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.370 18:24:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.370 18:24:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.371 18:24:39 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.371 18:24:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.371 18:24:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.371 18:24:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.371 18:24:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:32.371 18:24:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:32.371 Cannot find device "nvmf_tgt_br" 00:08:32.371 18:24:39 -- nvmf/common.sh@154 -- # true 00:08:32.371 18:24:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.371 Cannot find device "nvmf_tgt_br2" 00:08:32.371 18:24:39 -- nvmf/common.sh@155 -- # true 00:08:32.371 18:24:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:32.371 18:24:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:32.371 Cannot find device "nvmf_tgt_br" 00:08:32.371 18:24:39 -- nvmf/common.sh@157 -- # true 00:08:32.371 18:24:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:32.371 Cannot find device "nvmf_tgt_br2" 00:08:32.371 18:24:39 -- nvmf/common.sh@158 -- # true 00:08:32.371 18:24:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:32.628 18:24:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:32.628 18:24:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.628 18:24:39 -- nvmf/common.sh@161 -- # true 00:08:32.628 18:24:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.628 18:24:39 -- nvmf/common.sh@162 -- # true 00:08:32.628 18:24:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:32.628 18:24:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:32.628 18:24:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:32.628 18:24:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:32.628 18:24:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.628 18:24:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.628 18:24:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.628 18:24:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:32.628 18:24:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:32.628 18:24:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:32.628 18:24:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:32.628 18:24:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:32.628 18:24:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:32.628 18:24:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.628 18:24:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.628 18:24:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.628 18:24:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:32.628 18:24:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:32.628 18:24:39 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.628 18:24:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.628 18:24:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.628 18:24:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.628 18:24:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.628 18:24:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:32.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:32.628 00:08:32.628 --- 10.0.0.2 ping statistics --- 00:08:32.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.628 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:32.628 18:24:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:32.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:32.628 00:08:32.628 --- 10.0.0.3 ping statistics --- 00:08:32.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.628 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:32.628 18:24:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:32.885 00:08:32.885 --- 10.0.0.1 ping statistics --- 00:08:32.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.885 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:32.885 18:24:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.885 18:24:40 -- nvmf/common.sh@421 -- # return 0 00:08:32.885 18:24:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.885 18:24:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.885 18:24:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:32.885 18:24:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:32.885 18:24:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.885 18:24:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:32.885 18:24:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:32.885 18:24:40 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:32.885 18:24:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:32.885 18:24:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:32.885 18:24:40 -- common/autotest_common.sh@10 -- # set +x 00:08:32.885 18:24:40 -- nvmf/common.sh@469 -- # nvmfpid=73462 00:08:32.885 18:24:40 -- nvmf/common.sh@470 -- # waitforlisten 73462 00:08:32.885 18:24:40 -- common/autotest_common.sh@819 -- # '[' -z 73462 ']' 00:08:32.885 18:24:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.885 18:24:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.885 18:24:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
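For reference, the referral steps that follow (creating the TCP transport, adding a discovery listener on 10.0.0.2:8009, registering three referrals on port 4430, and checking them over RPC and with nvme discover) can be reproduced by hand against a running nvmf_tgt. This is only a minimal sketch of what referrals.sh drives through rpc_cmd; the scripts/rpc.py path assumes the commands are run from the SPDK repo root, and the addresses and ports are the ones used in this run:
  # create the TCP transport and a discovery listener (flags as in this run)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # add three discovery referrals on port 4430
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # verify via RPC and via the discovery service itself
  scripts/rpc.py nvmf_discovery_get_referrals
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json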
00:08:32.885 18:24:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.885 18:24:40 -- common/autotest_common.sh@10 -- # set +x 00:08:32.885 18:24:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.885 [2024-07-14 18:24:40.121414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:32.885 [2024-07-14 18:24:40.121541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.885 [2024-07-14 18:24:40.267976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.142 [2024-07-14 18:24:40.362904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.142 [2024-07-14 18:24:40.363050] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.142 [2024-07-14 18:24:40.363064] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.142 [2024-07-14 18:24:40.363073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.142 [2024-07-14 18:24:40.363725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.142 [2024-07-14 18:24:40.363798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.142 [2024-07-14 18:24:40.363884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.142 [2024-07-14 18:24:40.363887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.708 18:24:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.708 18:24:41 -- common/autotest_common.sh@852 -- # return 0 00:08:33.708 18:24:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:33.708 18:24:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:33.708 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.708 18:24:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.708 18:24:41 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.708 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.708 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.708 [2024-07-14 18:24:41.109651] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.708 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.708 18:24:41 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:33.708 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.708 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 [2024-07-14 18:24:41.130711] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:33.965 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.965 18:24:41 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.965 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.965 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.965 18:24:41 -- target/referrals.sh@45 -- # rpc_cmd 
nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.965 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.965 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.965 18:24:41 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.965 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.965 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.965 18:24:41 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.965 18:24:41 -- target/referrals.sh@48 -- # jq length 00:08:33.965 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.965 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.965 18:24:41 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:33.965 18:24:41 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:33.965 18:24:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.965 18:24:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.965 18:24:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.965 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.965 18:24:41 -- target/referrals.sh@21 -- # sort 00:08:33.965 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.965 18:24:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.966 18:24:41 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.966 18:24:41 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.966 18:24:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.966 18:24:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.966 18:24:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.966 18:24:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.966 18:24:41 -- target/referrals.sh@26 -- # sort 00:08:33.966 18:24:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.966 18:24:41 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.966 18:24:41 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.966 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.966 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.966 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.966 18:24:41 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.966 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.966 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:34.223 18:24:41 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.223 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.223 18:24:41 -- target/referrals.sh@56 -- # jq length 00:08:34.223 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.223 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:34.223 18:24:41 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:34.223 18:24:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.223 18:24:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # sort 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # echo 00:08:34.223 18:24:41 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:34.223 18:24:41 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:34.223 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.223 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.223 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.223 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:34.223 18:24:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.223 18:24:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.223 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.223 18:24:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.223 18:24:41 -- target/referrals.sh@21 -- # sort 00:08:34.223 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:34.223 18:24:41 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.223 18:24:41 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:34.223 18:24:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.223 18:24:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.223 18:24:41 -- target/referrals.sh@26 -- # sort 00:08:34.481 18:24:41 -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:34.481 18:24:41 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.481 18:24:41 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:34.481 18:24:41 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:34.481 18:24:41 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.481 18:24:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.481 18:24:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.481 18:24:41 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:34.481 18:24:41 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:34.481 18:24:41 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.481 18:24:41 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.481 18:24:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.481 18:24:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.481 18:24:41 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.481 18:24:41 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.481 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.481 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.481 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.481 18:24:41 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:34.481 18:24:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.481 18:24:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.481 18:24:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.481 18:24:41 -- target/referrals.sh@21 -- # sort 00:08:34.481 18:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.481 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:08:34.481 18:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.481 18:24:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:34.481 18:24:41 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.481 18:24:41 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:34.481 18:24:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.481 18:24:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.481 18:24:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.481 18:24:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.481 18:24:41 -- target/referrals.sh@26 -- # sort 00:08:34.740 18:24:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:34.740 18:24:41 -- target/referrals.sh@74 
-- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.740 18:24:41 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:34.740 18:24:41 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.740 18:24:41 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:34.740 18:24:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.740 18:24:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.740 18:24:41 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:34.740 18:24:42 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:34.740 18:24:42 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.740 18:24:42 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.740 18:24:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.740 18:24:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.740 18:24:42 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.740 18:24:42 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:34.740 18:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.740 18:24:42 -- common/autotest_common.sh@10 -- # set +x 00:08:34.740 18:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.740 18:24:42 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.740 18:24:42 -- target/referrals.sh@82 -- # jq length 00:08:34.740 18:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.740 18:24:42 -- common/autotest_common.sh@10 -- # set +x 00:08:34.740 18:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.740 18:24:42 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:34.740 18:24:42 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:34.740 18:24:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.740 18:24:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.740 18:24:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.740 18:24:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.740 18:24:42 -- target/referrals.sh@26 -- # sort 00:08:34.999 18:24:42 -- target/referrals.sh@26 -- # echo 00:08:34.999 18:24:42 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.999 18:24:42 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.999 18:24:42 -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.999 18:24:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.999 18:24:42 -- nvmf/common.sh@116 -- # sync 00:08:34.999 18:24:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.999 18:24:42 -- nvmf/common.sh@119 -- # set +e 00:08:34.999 18:24:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.999 18:24:42 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:08:34.999 rmmod nvme_tcp 00:08:34.999 rmmod nvme_fabrics 00:08:34.999 rmmod nvme_keyring 00:08:34.999 18:24:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.999 18:24:42 -- nvmf/common.sh@123 -- # set -e 00:08:34.999 18:24:42 -- nvmf/common.sh@124 -- # return 0 00:08:34.999 18:24:42 -- nvmf/common.sh@477 -- # '[' -n 73462 ']' 00:08:34.999 18:24:42 -- nvmf/common.sh@478 -- # killprocess 73462 00:08:34.999 18:24:42 -- common/autotest_common.sh@926 -- # '[' -z 73462 ']' 00:08:34.999 18:24:42 -- common/autotest_common.sh@930 -- # kill -0 73462 00:08:34.999 18:24:42 -- common/autotest_common.sh@931 -- # uname 00:08:34.999 18:24:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:34.999 18:24:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73462 00:08:34.999 18:24:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:34.999 18:24:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:34.999 killing process with pid 73462 00:08:34.999 18:24:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73462' 00:08:34.999 18:24:42 -- common/autotest_common.sh@945 -- # kill 73462 00:08:34.999 18:24:42 -- common/autotest_common.sh@950 -- # wait 73462 00:08:35.256 18:24:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:35.256 18:24:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:35.256 18:24:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:35.256 18:24:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.256 18:24:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:35.256 18:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.256 18:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.256 18:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.256 18:24:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:35.256 00:08:35.256 real 0m2.952s 00:08:35.256 user 0m9.535s 00:08:35.256 sys 0m0.826s 00:08:35.256 18:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.256 18:24:42 -- common/autotest_common.sh@10 -- # set +x 00:08:35.256 ************************************ 00:08:35.256 END TEST nvmf_referrals 00:08:35.256 ************************************ 00:08:35.256 18:24:42 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:35.256 18:24:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:35.256 18:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.256 18:24:42 -- common/autotest_common.sh@10 -- # set +x 00:08:35.256 ************************************ 00:08:35.256 START TEST nvmf_connect_disconnect 00:08:35.256 ************************************ 00:08:35.256 18:24:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:35.256 * Looking for test storage... 
00:08:35.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.514 18:24:42 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.514 18:24:42 -- nvmf/common.sh@7 -- # uname -s 00:08:35.514 18:24:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.514 18:24:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.514 18:24:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.514 18:24:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.514 18:24:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.514 18:24:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.514 18:24:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.514 18:24:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.514 18:24:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.514 18:24:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.514 18:24:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:08:35.514 18:24:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:08:35.514 18:24:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.514 18:24:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.514 18:24:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.514 18:24:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.514 18:24:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.514 18:24:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.514 18:24:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.514 18:24:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.514 18:24:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.514 18:24:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.514 18:24:42 -- 
paths/export.sh@5 -- # export PATH 00:08:35.514 18:24:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.514 18:24:42 -- nvmf/common.sh@46 -- # : 0 00:08:35.514 18:24:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:35.514 18:24:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:35.514 18:24:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:35.514 18:24:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.514 18:24:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.514 18:24:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:35.514 18:24:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:35.514 18:24:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:35.514 18:24:42 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.514 18:24:42 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.514 18:24:42 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:35.514 18:24:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:35.514 18:24:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.514 18:24:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:35.514 18:24:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:35.514 18:24:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:35.514 18:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.514 18:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.514 18:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.514 18:24:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:35.514 18:24:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:35.514 18:24:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:35.514 18:24:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:35.514 18:24:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:35.514 18:24:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:35.514 18:24:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.514 18:24:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.514 18:24:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.514 18:24:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:35.514 18:24:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.514 18:24:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.514 18:24:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.514 18:24:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.514 18:24:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.514 18:24:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.514 18:24:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.514 18:24:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.514 18:24:42 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:08:35.514 18:24:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:35.514 Cannot find device "nvmf_tgt_br" 00:08:35.514 18:24:42 -- nvmf/common.sh@154 -- # true 00:08:35.514 18:24:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.514 Cannot find device "nvmf_tgt_br2" 00:08:35.514 18:24:42 -- nvmf/common.sh@155 -- # true 00:08:35.514 18:24:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:35.514 18:24:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:35.514 Cannot find device "nvmf_tgt_br" 00:08:35.514 18:24:42 -- nvmf/common.sh@157 -- # true 00:08:35.514 18:24:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:35.514 Cannot find device "nvmf_tgt_br2" 00:08:35.514 18:24:42 -- nvmf/common.sh@158 -- # true 00:08:35.514 18:24:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:35.514 18:24:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:35.514 18:24:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.514 18:24:42 -- nvmf/common.sh@161 -- # true 00:08:35.514 18:24:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.514 18:24:42 -- nvmf/common.sh@162 -- # true 00:08:35.514 18:24:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.514 18:24:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.514 18:24:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.514 18:24:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.514 18:24:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.514 18:24:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.514 18:24:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.514 18:24:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.772 18:24:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.773 18:24:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:35.773 18:24:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:35.773 18:24:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:35.773 18:24:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:35.773 18:24:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.773 18:24:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.773 18:24:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.773 18:24:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:35.773 18:24:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:35.773 18:24:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.773 18:24:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.773 18:24:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.773 18:24:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:08:35.773 18:24:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.773 18:24:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:35.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:35.773 00:08:35.773 --- 10.0.0.2 ping statistics --- 00:08:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.773 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:35.773 18:24:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:35.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:35.773 00:08:35.773 --- 10.0.0.3 ping statistics --- 00:08:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.773 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:35.773 18:24:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:08:35.773 00:08:35.773 --- 10.0.0.1 ping statistics --- 00:08:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.773 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:35.773 18:24:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.773 18:24:43 -- nvmf/common.sh@421 -- # return 0 00:08:35.773 18:24:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:35.773 18:24:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.773 18:24:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:35.773 18:24:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:35.773 18:24:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.773 18:24:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:35.773 18:24:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:35.773 18:24:43 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:35.773 18:24:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:35.773 18:24:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:35.773 18:24:43 -- common/autotest_common.sh@10 -- # set +x 00:08:35.773 18:24:43 -- nvmf/common.sh@469 -- # nvmfpid=73754 00:08:35.773 18:24:43 -- nvmf/common.sh@470 -- # waitforlisten 73754 00:08:35.773 18:24:43 -- common/autotest_common.sh@819 -- # '[' -z 73754 ']' 00:08:35.773 18:24:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.773 18:24:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:35.773 18:24:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.773 18:24:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.773 18:24:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:35.773 18:24:43 -- common/autotest_common.sh@10 -- # set +x 00:08:35.773 [2024-07-14 18:24:43.103122] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
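The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines further below comes from connect_disconnect.sh repeatedly connecting to and disconnecting from the same subsystem (100 iterations in this run, with 8 I/O queues per connect). A hand-run approximation of one iteration, assuming the cnode1 subsystem and 10.0.0.2:4420 listener created in the setup that follows:
  # one connect/disconnect cycle against the target configured below
  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # prints "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1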
00:08:35.773 [2024-07-14 18:24:43.103186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.030 [2024-07-14 18:24:43.237280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.030 [2024-07-14 18:24:43.308025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.030 [2024-07-14 18:24:43.308197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.030 [2024-07-14 18:24:43.308209] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.030 [2024-07-14 18:24:43.308218] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.030 [2024-07-14 18:24:43.308340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.030 [2024-07-14 18:24:43.309385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.030 [2024-07-14 18:24:43.309565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.030 [2024-07-14 18:24:43.309567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.962 18:24:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:36.962 18:24:44 -- common/autotest_common.sh@852 -- # return 0 00:08:36.962 18:24:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:36.962 18:24:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:36.962 18:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 18:24:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:36.962 18:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.962 18:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 [2024-07-14 18:24:44.156715] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.962 18:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:36.962 18:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.962 18:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 18:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.962 18:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.962 18:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 18:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.962 18:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.962 18:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 18:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.962 18:24:44 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.962 18:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 [2024-07-14 18:24:44.224154] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.962 18:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:36.962 18:24:44 -- target/connect_disconnect.sh@34 -- # set +x 00:08:39.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:08.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.224 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.437 18:28:27 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:20.437 18:28:27 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:20.437 18:28:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:20.437 18:28:27 -- nvmf/common.sh@116 -- # sync 00:12:20.437 18:28:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:20.437 18:28:27 -- nvmf/common.sh@119 -- # set +e 00:12:20.437 18:28:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:20.437 18:28:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:20.437 rmmod nvme_tcp 00:12:20.437 rmmod nvme_fabrics 00:12:20.437 rmmod nvme_keyring 00:12:20.437 18:28:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:20.437 18:28:27 -- nvmf/common.sh@123 -- # set -e 00:12:20.437 18:28:27 -- nvmf/common.sh@124 -- # return 0 00:12:20.437 18:28:27 -- nvmf/common.sh@477 -- # '[' -n 73754 ']' 00:12:20.437 18:28:27 -- nvmf/common.sh@478 -- # killprocess 73754 00:12:20.437 18:28:27 -- common/autotest_common.sh@926 -- # '[' -z 73754 ']' 00:12:20.437 18:28:27 -- common/autotest_common.sh@930 -- # kill -0 73754 00:12:20.437 18:28:27 -- common/autotest_common.sh@931 -- # uname 00:12:20.437 18:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.437 18:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73754 00:12:20.437 killing process with pid 73754 00:12:20.437 18:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:20.437 18:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:20.437 18:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73754' 00:12:20.437 18:28:27 -- common/autotest_common.sh@945 -- # kill 73754 00:12:20.437 18:28:27 -- common/autotest_common.sh@950 -- # wait 73754 00:12:20.696 18:28:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:20.696 18:28:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:20.696 18:28:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:20.696 18:28:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.696 18:28:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:20.696 18:28:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.696 18:28:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.696 18:28:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.696 18:28:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:20.696 00:12:20.696 real 3m45.440s 00:12:20.696 user 14m35.030s 00:12:20.696 sys 0m25.375s 00:12:20.696 18:28:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.696 
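The long run of 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' lines above is the output of the 100-iteration loop that connect_disconnect.sh configured earlier (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A minimal sketch of what each iteration amounts to, based on the commands and addresses visible in this log (the real script also waits for the namespace device to appear and checks for errors):

    for ((i = 0; i < 100; i++)); do
        # attach to the subsystem exported at 10.0.0.2:4420 over TCP
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # ...wait for the block device backed by Malloc0 to show up...
        # tearing the controller down prints the "disconnected 1 controller(s)" line
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done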
18:28:28 -- common/autotest_common.sh@10 -- # set +x 00:12:20.696 ************************************ 00:12:20.696 END TEST nvmf_connect_disconnect 00:12:20.696 ************************************ 00:12:20.696 18:28:28 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.696 18:28:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:20.696 18:28:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:20.696 18:28:28 -- common/autotest_common.sh@10 -- # set +x 00:12:20.696 ************************************ 00:12:20.696 START TEST nvmf_multitarget 00:12:20.696 ************************************ 00:12:20.696 18:28:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.954 * Looking for test storage... 00:12:20.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:20.954 18:28:28 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:20.954 18:28:28 -- nvmf/common.sh@7 -- # uname -s 00:12:20.954 18:28:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.954 18:28:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.954 18:28:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.954 18:28:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.954 18:28:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.954 18:28:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.954 18:28:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.954 18:28:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.954 18:28:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.954 18:28:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.954 18:28:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:20.954 18:28:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:12:20.954 18:28:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.954 18:28:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.954 18:28:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:20.954 18:28:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.954 18:28:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.954 18:28:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.954 18:28:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.954 18:28:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.954 18:28:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.955 18:28:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.955 18:28:28 -- paths/export.sh@5 -- # export PATH 00:12:20.955 18:28:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.955 18:28:28 -- nvmf/common.sh@46 -- # : 0 00:12:20.955 18:28:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:20.955 18:28:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:20.955 18:28:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:20.955 18:28:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.955 18:28:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.955 18:28:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:20.955 18:28:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:20.955 18:28:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:20.955 18:28:28 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.955 18:28:28 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:20.955 18:28:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:20.955 18:28:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.955 18:28:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:20.955 18:28:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:20.955 18:28:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:20.955 18:28:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.955 18:28:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.955 18:28:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.955 18:28:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:20.955 18:28:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:20.955 18:28:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:20.955 18:28:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:20.955 18:28:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:20.955 18:28:28 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:12:20.955 18:28:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.955 18:28:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.955 18:28:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:20.955 18:28:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:20.955 18:28:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:20.955 18:28:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:20.955 18:28:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:20.955 18:28:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.955 18:28:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:20.955 18:28:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:20.955 18:28:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:20.955 18:28:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:20.955 18:28:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:20.955 18:28:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:20.955 Cannot find device "nvmf_tgt_br" 00:12:20.955 18:28:28 -- nvmf/common.sh@154 -- # true 00:12:20.955 18:28:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:20.955 Cannot find device "nvmf_tgt_br2" 00:12:20.955 18:28:28 -- nvmf/common.sh@155 -- # true 00:12:20.955 18:28:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:20.955 18:28:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:20.955 Cannot find device "nvmf_tgt_br" 00:12:20.955 18:28:28 -- nvmf/common.sh@157 -- # true 00:12:20.955 18:28:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:20.955 Cannot find device "nvmf_tgt_br2" 00:12:20.955 18:28:28 -- nvmf/common.sh@158 -- # true 00:12:20.955 18:28:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:20.955 18:28:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:20.955 18:28:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:20.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.955 18:28:28 -- nvmf/common.sh@161 -- # true 00:12:20.955 18:28:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:20.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.955 18:28:28 -- nvmf/common.sh@162 -- # true 00:12:20.955 18:28:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:20.955 18:28:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:20.955 18:28:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.214 18:28:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.214 18:28:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.214 18:28:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.214 18:28:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.214 18:28:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.214 18:28:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.214 18:28:28 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:12:21.214 18:28:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:21.214 18:28:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:21.214 18:28:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:21.214 18:28:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.214 18:28:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:21.214 18:28:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.214 18:28:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:21.214 18:28:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:21.214 18:28:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.214 18:28:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:21.214 18:28:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:21.214 18:28:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:21.214 18:28:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:21.214 18:28:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:21.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:21.214 00:12:21.214 --- 10.0.0.2 ping statistics --- 00:12:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.214 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:21.214 18:28:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:21.214 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:21.214 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:21.214 00:12:21.214 --- 10.0.0.3 ping statistics --- 00:12:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.214 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:21.214 18:28:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:21.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:21.214 00:12:21.214 --- 10.0.0.1 ping statistics --- 00:12:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.214 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:21.214 18:28:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.214 18:28:28 -- nvmf/common.sh@421 -- # return 0 00:12:21.214 18:28:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:21.214 18:28:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.214 18:28:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:21.214 18:28:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:21.214 18:28:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.214 18:28:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:21.214 18:28:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:21.214 18:28:28 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:21.214 18:28:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:21.214 18:28:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:21.214 18:28:28 -- common/autotest_common.sh@10 -- # set +x 00:12:21.214 18:28:28 -- nvmf/common.sh@469 -- # nvmfpid=77535 00:12:21.214 18:28:28 -- nvmf/common.sh@470 -- # waitforlisten 77535 00:12:21.214 18:28:28 -- common/autotest_common.sh@819 -- # '[' -z 77535 ']' 00:12:21.214 18:28:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.214 18:28:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.214 18:28:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:21.214 18:28:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.214 18:28:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:21.214 18:28:28 -- common/autotest_common.sh@10 -- # set +x 00:12:21.472 [2024-07-14 18:28:28.637870] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:21.472 [2024-07-14 18:28:28.637984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.472 [2024-07-14 18:28:28.777892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.472 [2024-07-14 18:28:28.845039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:21.472 [2024-07-14 18:28:28.845360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.472 [2024-07-14 18:28:28.845447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.472 [2024-07-14 18:28:28.845873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
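Once the reactors are up, the multitarget test below drives /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py to add and remove extra target instances inside the single nvmf_tgt process, counting targets with jq after each step. A condensed sketch of that sequence, with the target names and sizes taken from the log (the '!=' comparisons in the script simply fail the test if a count is wrong):

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    $rpc nvmf_get_targets | jq length            # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length            # 3 after the two additions
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length            # back to 1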
00:12:21.472 [2024-07-14 18:28:28.846145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.472 [2024-07-14 18:28:28.846300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.472 [2024-07-14 18:28:28.846466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.472 [2024-07-14 18:28:28.846611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.407 18:28:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:22.407 18:28:29 -- common/autotest_common.sh@852 -- # return 0 00:12:22.407 18:28:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:22.407 18:28:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:22.407 18:28:29 -- common/autotest_common.sh@10 -- # set +x 00:12:22.407 18:28:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.407 18:28:29 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:22.407 18:28:29 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.407 18:28:29 -- target/multitarget.sh@21 -- # jq length 00:12:22.407 18:28:29 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:22.407 18:28:29 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:22.665 "nvmf_tgt_1" 00:12:22.665 18:28:29 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:22.665 "nvmf_tgt_2" 00:12:22.665 18:28:30 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.665 18:28:30 -- target/multitarget.sh@28 -- # jq length 00:12:22.922 18:28:30 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:22.922 18:28:30 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:22.922 true 00:12:22.922 18:28:30 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:23.181 true 00:12:23.181 18:28:30 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.181 18:28:30 -- target/multitarget.sh@35 -- # jq length 00:12:23.181 18:28:30 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:23.181 18:28:30 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:23.181 18:28:30 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:23.181 18:28:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:23.181 18:28:30 -- nvmf/common.sh@116 -- # sync 00:12:23.440 18:28:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:23.440 18:28:30 -- nvmf/common.sh@119 -- # set +e 00:12:23.440 18:28:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:23.440 18:28:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:23.440 rmmod nvme_tcp 00:12:23.440 rmmod nvme_fabrics 00:12:23.440 rmmod nvme_keyring 00:12:23.440 18:28:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:23.440 18:28:30 -- nvmf/common.sh@123 -- # set -e 00:12:23.440 18:28:30 -- nvmf/common.sh@124 -- # return 0 00:12:23.440 18:28:30 -- nvmf/common.sh@477 -- # '[' -n 77535 ']' 00:12:23.440 18:28:30 -- nvmf/common.sh@478 -- # killprocess 77535 00:12:23.440 18:28:30 
-- common/autotest_common.sh@926 -- # '[' -z 77535 ']' 00:12:23.440 18:28:30 -- common/autotest_common.sh@930 -- # kill -0 77535 00:12:23.440 18:28:30 -- common/autotest_common.sh@931 -- # uname 00:12:23.440 18:28:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:23.440 18:28:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77535 00:12:23.440 18:28:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:23.440 18:28:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:23.440 killing process with pid 77535 00:12:23.440 18:28:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77535' 00:12:23.440 18:28:30 -- common/autotest_common.sh@945 -- # kill 77535 00:12:23.440 18:28:30 -- common/autotest_common.sh@950 -- # wait 77535 00:12:23.699 18:28:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:23.699 18:28:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:23.699 18:28:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:23.699 18:28:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.699 18:28:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:23.699 18:28:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.699 18:28:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.699 18:28:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.699 18:28:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:23.699 00:12:23.699 real 0m2.819s 00:12:23.699 user 0m9.242s 00:12:23.699 sys 0m0.729s 00:12:23.699 18:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.699 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:12:23.699 ************************************ 00:12:23.699 END TEST nvmf_multitarget 00:12:23.699 ************************************ 00:12:23.699 18:28:30 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:23.699 18:28:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:23.699 18:28:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.699 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:12:23.699 ************************************ 00:12:23.699 START TEST nvmf_rpc 00:12:23.699 ************************************ 00:12:23.699 18:28:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:23.700 * Looking for test storage... 
00:12:23.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:23.700 18:28:31 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:23.700 18:28:31 -- nvmf/common.sh@7 -- # uname -s 00:12:23.700 18:28:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.700 18:28:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.700 18:28:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.700 18:28:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.700 18:28:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.700 18:28:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.700 18:28:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.700 18:28:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.700 18:28:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.700 18:28:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.700 18:28:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:23.700 18:28:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:12:23.700 18:28:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.700 18:28:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.700 18:28:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:23.700 18:28:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.700 18:28:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.700 18:28:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.700 18:28:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.700 18:28:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.700 18:28:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.700 18:28:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.700 18:28:31 -- paths/export.sh@5 
-- # export PATH 00:12:23.700 18:28:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.700 18:28:31 -- nvmf/common.sh@46 -- # : 0 00:12:23.700 18:28:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:23.700 18:28:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:23.700 18:28:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:23.700 18:28:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.700 18:28:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.700 18:28:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:23.700 18:28:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:23.700 18:28:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:23.700 18:28:31 -- target/rpc.sh@11 -- # loops=5 00:12:23.700 18:28:31 -- target/rpc.sh@23 -- # nvmftestinit 00:12:23.700 18:28:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:23.700 18:28:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.700 18:28:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:23.700 18:28:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:23.700 18:28:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:23.700 18:28:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.700 18:28:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.700 18:28:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.700 18:28:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:23.700 18:28:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:23.700 18:28:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:23.700 18:28:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:23.700 18:28:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:23.700 18:28:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:23.700 18:28:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.700 18:28:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.700 18:28:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:23.700 18:28:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:23.700 18:28:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:23.700 18:28:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:23.700 18:28:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:23.700 18:28:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.700 18:28:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:23.700 18:28:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:23.700 18:28:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:23.700 18:28:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:23.700 18:28:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:23.700 18:28:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:23.700 Cannot find device 
"nvmf_tgt_br" 00:12:23.700 18:28:31 -- nvmf/common.sh@154 -- # true 00:12:23.700 18:28:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:23.700 Cannot find device "nvmf_tgt_br2" 00:12:23.700 18:28:31 -- nvmf/common.sh@155 -- # true 00:12:23.700 18:28:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:23.959 18:28:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:23.959 Cannot find device "nvmf_tgt_br" 00:12:23.959 18:28:31 -- nvmf/common.sh@157 -- # true 00:12:23.959 18:28:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:23.959 Cannot find device "nvmf_tgt_br2" 00:12:23.959 18:28:31 -- nvmf/common.sh@158 -- # true 00:12:23.959 18:28:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:23.959 18:28:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:23.959 18:28:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:23.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.959 18:28:31 -- nvmf/common.sh@161 -- # true 00:12:23.959 18:28:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:23.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.959 18:28:31 -- nvmf/common.sh@162 -- # true 00:12:23.959 18:28:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:23.959 18:28:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:23.959 18:28:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:23.959 18:28:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:23.959 18:28:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:23.959 18:28:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:23.959 18:28:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:23.959 18:28:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:23.959 18:28:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:23.959 18:28:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:23.959 18:28:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:23.959 18:28:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:23.959 18:28:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:23.959 18:28:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:23.960 18:28:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:23.960 18:28:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:23.960 18:28:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:23.960 18:28:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:23.960 18:28:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:23.960 18:28:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:23.960 18:28:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:23.960 18:28:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:23.960 18:28:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:23.960 18:28:31 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:23.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:23.960 00:12:23.960 --- 10.0.0.2 ping statistics --- 00:12:23.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.960 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:23.960 18:28:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:24.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:12:24.219 00:12:24.219 --- 10.0.0.3 ping statistics --- 00:12:24.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.219 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:24.219 18:28:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:24.219 00:12:24.219 --- 10.0.0.1 ping statistics --- 00:12:24.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.219 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:24.219 18:28:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.219 18:28:31 -- nvmf/common.sh@421 -- # return 0 00:12:24.219 18:28:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:24.219 18:28:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.219 18:28:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:24.219 18:28:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:24.219 18:28:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.219 18:28:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:24.219 18:28:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:24.219 18:28:31 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:24.219 18:28:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:24.219 18:28:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:24.219 18:28:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.219 18:28:31 -- nvmf/common.sh@469 -- # nvmfpid=77768 00:12:24.219 18:28:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.219 18:28:31 -- nvmf/common.sh@470 -- # waitforlisten 77768 00:12:24.219 18:28:31 -- common/autotest_common.sh@819 -- # '[' -z 77768 ']' 00:12:24.219 18:28:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.219 18:28:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:24.219 18:28:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.219 18:28:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:24.219 18:28:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.219 [2024-07-14 18:28:31.472654] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
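As in the two previous tests, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers before any configuration is sent. A minimal sketch of that start-and-wait pattern; the actual waitforlisten helper in autotest_common.sh may probe differently, and rpc_get_methods is used here only as an assumed cheap liveness check:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the target owns /var/tmp/spdk.sock and responds to RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done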
00:12:24.219 [2024-07-14 18:28:31.472753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.219 [2024-07-14 18:28:31.615077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.478 [2024-07-14 18:28:31.684045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:24.478 [2024-07-14 18:28:31.684223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.478 [2024-07-14 18:28:31.684236] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.478 [2024-07-14 18:28:31.684245] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.478 [2024-07-14 18:28:31.684417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.478 [2024-07-14 18:28:31.684857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.478 [2024-07-14 18:28:31.685065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.478 [2024-07-14 18:28:31.685072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.044 18:28:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:25.044 18:28:32 -- common/autotest_common.sh@852 -- # return 0 00:12:25.044 18:28:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:25.044 18:28:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:25.044 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 18:28:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.044 18:28:32 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:25.044 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.044 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.302 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.302 18:28:32 -- target/rpc.sh@26 -- # stats='{ 00:12:25.302 "poll_groups": [ 00:12:25.302 { 00:12:25.302 "admin_qpairs": 0, 00:12:25.302 "completed_nvme_io": 0, 00:12:25.302 "current_admin_qpairs": 0, 00:12:25.302 "current_io_qpairs": 0, 00:12:25.302 "io_qpairs": 0, 00:12:25.302 "name": "nvmf_tgt_poll_group_0", 00:12:25.302 "pending_bdev_io": 0, 00:12:25.302 "transports": [] 00:12:25.302 }, 00:12:25.302 { 00:12:25.302 "admin_qpairs": 0, 00:12:25.302 "completed_nvme_io": 0, 00:12:25.302 "current_admin_qpairs": 0, 00:12:25.302 "current_io_qpairs": 0, 00:12:25.302 "io_qpairs": 0, 00:12:25.302 "name": "nvmf_tgt_poll_group_1", 00:12:25.302 "pending_bdev_io": 0, 00:12:25.302 "transports": [] 00:12:25.302 }, 00:12:25.302 { 00:12:25.302 "admin_qpairs": 0, 00:12:25.302 "completed_nvme_io": 0, 00:12:25.302 "current_admin_qpairs": 0, 00:12:25.302 "current_io_qpairs": 0, 00:12:25.302 "io_qpairs": 0, 00:12:25.302 "name": "nvmf_tgt_poll_group_2", 00:12:25.302 "pending_bdev_io": 0, 00:12:25.302 "transports": [] 00:12:25.302 }, 00:12:25.302 { 00:12:25.302 "admin_qpairs": 0, 00:12:25.302 "completed_nvme_io": 0, 00:12:25.302 "current_admin_qpairs": 0, 00:12:25.302 "current_io_qpairs": 0, 00:12:25.302 "io_qpairs": 0, 00:12:25.302 "name": "nvmf_tgt_poll_group_3", 00:12:25.302 "pending_bdev_io": 0, 00:12:25.302 "transports": [] 00:12:25.302 } 00:12:25.302 ], 00:12:25.302 "tick_rate": 2200000000 00:12:25.302 }' 00:12:25.302 
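The JSON above is nvmf_get_stats before any transport exists: four poll groups (one per core in the 0xF mask), each with an empty transports list and zeroed qpair counters. rpc.sh then counts the poll groups, creates the TCP transport, and queries the stats again, after which every poll group carries a {"trtype": "TCP"} entry. A condensed sketch of that check, assuming rpc.py is pointed at the target's default /var/tmp/spdk.sock and reusing the flags from the rpc_cmd invocation in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_get_stats | jq '.poll_groups[].name' | wc -l     # expect 4 poll groups
    $rpc nvmf_create_transport -t tcp -o -u 8192                # flags as issued by rpc.sh
    $rpc nvmf_get_stats | jq '.poll_groups[0].transports[0]'    # now reports TCP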
18:28:32 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:25.302 18:28:32 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:25.302 18:28:32 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:25.302 18:28:32 -- target/rpc.sh@15 -- # wc -l 00:12:25.302 18:28:32 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:25.302 18:28:32 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:25.302 18:28:32 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:25.302 18:28:32 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.302 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.302 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.302 [2024-07-14 18:28:32.578988] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.302 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.302 18:28:32 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:25.302 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.302 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.302 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.302 18:28:32 -- target/rpc.sh@33 -- # stats='{ 00:12:25.302 "poll_groups": [ 00:12:25.302 { 00:12:25.302 "admin_qpairs": 0, 00:12:25.302 "completed_nvme_io": 0, 00:12:25.302 "current_admin_qpairs": 0, 00:12:25.302 "current_io_qpairs": 0, 00:12:25.302 "io_qpairs": 0, 00:12:25.302 "name": "nvmf_tgt_poll_group_0", 00:12:25.302 "pending_bdev_io": 0, 00:12:25.302 "transports": [ 00:12:25.302 { 00:12:25.302 "trtype": "TCP" 00:12:25.302 } 00:12:25.302 ] 00:12:25.302 }, 00:12:25.302 { 00:12:25.302 "admin_qpairs": 0, 00:12:25.302 "completed_nvme_io": 0, 00:12:25.302 "current_admin_qpairs": 0, 00:12:25.302 "current_io_qpairs": 0, 00:12:25.302 "io_qpairs": 0, 00:12:25.302 "name": "nvmf_tgt_poll_group_1", 00:12:25.302 "pending_bdev_io": 0, 00:12:25.302 "transports": [ 00:12:25.302 { 00:12:25.302 "trtype": "TCP" 00:12:25.303 } 00:12:25.303 ] 00:12:25.303 }, 00:12:25.303 { 00:12:25.303 "admin_qpairs": 0, 00:12:25.303 "completed_nvme_io": 0, 00:12:25.303 "current_admin_qpairs": 0, 00:12:25.303 "current_io_qpairs": 0, 00:12:25.303 "io_qpairs": 0, 00:12:25.303 "name": "nvmf_tgt_poll_group_2", 00:12:25.303 "pending_bdev_io": 0, 00:12:25.303 "transports": [ 00:12:25.303 { 00:12:25.303 "trtype": "TCP" 00:12:25.303 } 00:12:25.303 ] 00:12:25.303 }, 00:12:25.303 { 00:12:25.303 "admin_qpairs": 0, 00:12:25.303 "completed_nvme_io": 0, 00:12:25.303 "current_admin_qpairs": 0, 00:12:25.303 "current_io_qpairs": 0, 00:12:25.303 "io_qpairs": 0, 00:12:25.303 "name": "nvmf_tgt_poll_group_3", 00:12:25.303 "pending_bdev_io": 0, 00:12:25.303 "transports": [ 00:12:25.303 { 00:12:25.303 "trtype": "TCP" 00:12:25.303 } 00:12:25.303 ] 00:12:25.303 } 00:12:25.303 ], 00:12:25.303 "tick_rate": 2200000000 00:12:25.303 }' 00:12:25.303 18:28:32 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:25.303 18:28:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:25.303 18:28:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:25.303 18:28:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.303 18:28:32 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:25.303 18:28:32 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:25.303 18:28:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:25.303 18:28:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.303 18:28:32 -- 
target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:25.561 18:28:32 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:25.561 18:28:32 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:25.561 18:28:32 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:25.561 18:28:32 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:25.561 18:28:32 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:25.561 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.561 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 Malloc1 00:12:25.561 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.561 18:28:32 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:25.561 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.561 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.561 18:28:32 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.561 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.561 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.561 18:28:32 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:25.561 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.561 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.561 18:28:32 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.561 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.561 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 [2024-07-14 18:28:32.788841] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.561 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.561 18:28:32 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db -a 10.0.0.2 -s 4420 00:12:25.561 18:28:32 -- common/autotest_common.sh@640 -- # local es=0 00:12:25.561 18:28:32 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db -a 10.0.0.2 -s 4420 00:12:25.561 18:28:32 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:25.561 18:28:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.561 18:28:32 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:25.561 18:28:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.561 18:28:32 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:25.561 18:28:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.561 18:28:32 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:25.561 18:28:32 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:25.561 18:28:32 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db -a 10.0.0.2 -s 4420 00:12:25.561 [2024-07-14 18:28:32.817098] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db' 00:12:25.561 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:25.561 could not add new controller: failed to write to nvme-fabrics device 00:12:25.561 18:28:32 -- common/autotest_common.sh@643 -- # es=1 00:12:25.561 18:28:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:25.561 18:28:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:25.561 18:28:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:25.561 18:28:32 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:25.561 18:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.561 18:28:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 18:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.562 18:28:32 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.820 18:28:32 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.820 18:28:32 -- common/autotest_common.sh@1177 -- # local i=0 00:12:25.820 18:28:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.820 18:28:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:25.820 18:28:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:27.717 18:28:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:27.717 18:28:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:27.717 18:28:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.717 18:28:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:27.717 18:28:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.717 18:28:35 -- common/autotest_common.sh@1187 -- # return 0 00:12:27.717 18:28:35 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.975 18:28:35 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.975 18:28:35 -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.975 18:28:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:27.975 18:28:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.975 18:28:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.975 18:28:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:27.975 18:28:35 -- common/autotest_common.sh@1210 -- # return 0 00:12:27.975 18:28:35 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:27.975 18:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.975 18:28:35 -- common/autotest_common.sh@10 
-- # set +x 00:12:27.975 18:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.975 18:28:35 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.975 18:28:35 -- common/autotest_common.sh@640 -- # local es=0 00:12:27.975 18:28:35 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.975 18:28:35 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:27.975 18:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.975 18:28:35 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:27.975 18:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.975 18:28:35 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:27.975 18:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.975 18:28:35 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:27.975 18:28:35 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:27.975 18:28:35 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.975 [2024-07-14 18:28:35.218212] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db' 00:12:27.975 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:27.975 could not add new controller: failed to write to nvme-fabrics device 00:12:27.975 18:28:35 -- common/autotest_common.sh@643 -- # es=1 00:12:27.975 18:28:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:27.975 18:28:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:27.975 18:28:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:27.975 18:28:35 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:27.975 18:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.975 18:28:35 -- common/autotest_common.sh@10 -- # set +x 00:12:27.975 18:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.975 18:28:35 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.975 18:28:35 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.975 18:28:35 -- common/autotest_common.sh@1177 -- # local i=0 00:12:27.975 18:28:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.975 18:28:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:27.975 18:28:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:30.505 18:28:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:30.505 18:28:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:30.505 18:28:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.505 18:28:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:30.505 18:28:37 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.505 18:28:37 -- common/autotest_common.sh@1187 -- # return 0 00:12:30.505 18:28:37 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.505 18:28:37 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.505 18:28:37 -- common/autotest_common.sh@1198 -- # local i=0 00:12:30.506 18:28:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:30.506 18:28:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.506 18:28:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:30.506 18:28:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.506 18:28:37 -- common/autotest_common.sh@1210 -- # return 0 00:12:30.506 18:28:37 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.506 18:28:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.506 18:28:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.506 18:28:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.506 18:28:37 -- target/rpc.sh@81 -- # seq 1 5 00:12:30.506 18:28:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.506 18:28:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.506 18:28:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.506 18:28:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.506 18:28:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.506 18:28:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.506 18:28:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.506 18:28:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.506 [2024-07-14 18:28:37.613336] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.506 18:28:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.506 18:28:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.506 18:28:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.506 18:28:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.506 18:28:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.506 18:28:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.506 18:28:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.506 18:28:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.506 18:28:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.506 18:28:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.506 18:28:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.506 18:28:37 -- common/autotest_common.sh@1177 -- # local i=0 00:12:30.506 18:28:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.506 18:28:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:30.506 18:28:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:32.407 18:28:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
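The entries above exercise the subsystem host allowlist: the first connect is rejected with "does not allow host", nvmf_subsystem_add_host admits the host NQN, nvmf_subsystem_remove_host revokes it, and allow_any_host -e reopens the subsystem to everyone. A minimal standalone sketch of that sequence, assuming scripts/rpc.py points at the same running target; the NQN/UUID values are the ones used in this run:

# Host identity as used throughout this log (placeholder values for a manual replay).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db
HOSTID=42162aed-0e24-4758-911b-86aefe0815db
SUBNQN=nqn.2016-06.io.spdk:cnode1
rpc=scripts/rpc.py

# Expected to fail: the subsystem starts with an empty allowlist.
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 \
    || echo "rejected as expected"

# Admit exactly this host, connect, then revoke again.
$rpc nvmf_subsystem_add_host $SUBNQN $HOSTNQN
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
nvme disconnect -n $SUBNQN
$rpc nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

# Alternatively drop the allowlist check entirely (-e enables allow_any_host, -d disables).
$rpc nvmf_subsystem_allow_any_host -e $SUBNQN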
00:12:32.407 18:28:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:32.407 18:28:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.407 18:28:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:32.407 18:28:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.407 18:28:39 -- common/autotest_common.sh@1187 -- # return 0 00:12:32.407 18:28:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.665 18:28:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.665 18:28:39 -- common/autotest_common.sh@1198 -- # local i=0 00:12:32.665 18:28:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:32.665 18:28:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.665 18:28:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:32.665 18:28:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.665 18:28:39 -- common/autotest_common.sh@1210 -- # return 0 00:12:32.665 18:28:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.665 18:28:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.665 18:28:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.665 18:28:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.665 18:28:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.665 18:28:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.665 18:28:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.665 18:28:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.665 18:28:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.665 18:28:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.665 18:28:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.665 18:28:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.665 18:28:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.665 18:28:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.665 18:28:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.665 18:28:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.665 [2024-07-14 18:28:39.914918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.665 18:28:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.665 18:28:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.665 18:28:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.665 18:28:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.665 18:28:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.665 18:28:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.665 18:28:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.665 18:28:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.665 18:28:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.665 18:28:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 
--hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.923 18:28:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.923 18:28:40 -- common/autotest_common.sh@1177 -- # local i=0 00:12:32.923 18:28:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.923 18:28:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:32.923 18:28:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:34.822 18:28:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:34.822 18:28:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:34.822 18:28:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.822 18:28:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:34.822 18:28:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.822 18:28:42 -- common/autotest_common.sh@1187 -- # return 0 00:12:34.822 18:28:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.822 18:28:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.822 18:28:42 -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.822 18:28:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:34.822 18:28:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.822 18:28:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:34.822 18:28:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.822 18:28:42 -- common/autotest_common.sh@1210 -- # return 0 00:12:34.822 18:28:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.822 18:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.822 18:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:34.822 18:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.823 18:28:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.823 18:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.823 18:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:34.823 18:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.823 18:28:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:34.823 18:28:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.823 18:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.823 18:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:34.823 18:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.823 18:28:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.823 18:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.823 18:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:34.823 [2024-07-14 18:28:42.216197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.823 18:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.823 18:28:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:34.823 18:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.823 18:28:42 -- common/autotest_common.sh@10 -- # set 
+x 00:12:34.823 18:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.823 18:28:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.823 18:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.823 18:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:34.823 18:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.823 18:28:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.092 18:28:42 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.092 18:28:42 -- common/autotest_common.sh@1177 -- # local i=0 00:12:35.092 18:28:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.092 18:28:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:35.092 18:28:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:37.007 18:28:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:37.007 18:28:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:37.007 18:28:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.007 18:28:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:37.007 18:28:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.007 18:28:44 -- common/autotest_common.sh@1187 -- # return 0 00:12:37.007 18:28:44 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.265 18:28:44 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.265 18:28:44 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.265 18:28:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.265 18:28:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.265 18:28:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.265 18:28:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.265 18:28:44 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.265 18:28:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.265 18:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.265 18:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.265 18:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.265 18:28:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.265 18:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.265 18:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.265 18:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.265 18:28:44 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.265 18:28:44 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.265 18:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.265 18:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.265 18:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.265 18:28:44 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.265 18:28:44 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:37.265 18:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.265 [2024-07-14 18:28:44.529708] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.265 18:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.265 18:28:44 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.265 18:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.265 18:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.265 18:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.265 18:28:44 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.265 18:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.265 18:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.265 18:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.265 18:28:44 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.523 18:28:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.523 18:28:44 -- common/autotest_common.sh@1177 -- # local i=0 00:12:37.523 18:28:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.523 18:28:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:37.523 18:28:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:39.423 18:28:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:39.423 18:28:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:39.423 18:28:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.423 18:28:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:39.423 18:28:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.423 18:28:46 -- common/autotest_common.sh@1187 -- # return 0 00:12:39.423 18:28:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.681 18:28:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.681 18:28:46 -- common/autotest_common.sh@1198 -- # local i=0 00:12:39.681 18:28:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:39.681 18:28:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.681 18:28:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:39.681 18:28:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.681 18:28:46 -- common/autotest_common.sh@1210 -- # return 0 00:12:39.681 18:28:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.681 18:28:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.681 18:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.681 18:28:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.681 18:28:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.681 18:28:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.681 18:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.681 18:28:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.681 18:28:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
00:12:39.681 18:28:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.681 18:28:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.681 18:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.681 18:28:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.681 18:28:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.681 18:28:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.681 18:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.681 [2024-07-14 18:28:46.947824] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.681 18:28:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.681 18:28:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.682 18:28:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.682 18:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.682 18:28:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.682 18:28:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.682 18:28:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.682 18:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.682 18:28:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.682 18:28:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.940 18:28:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.940 18:28:47 -- common/autotest_common.sh@1177 -- # local i=0 00:12:39.940 18:28:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.940 18:28:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:39.940 18:28:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:41.839 18:28:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:41.839 18:28:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:41.839 18:28:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.839 18:28:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:41.839 18:28:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.839 18:28:49 -- common/autotest_common.sh@1187 -- # return 0 00:12:41.839 18:28:49 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.839 18:28:49 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.839 18:28:49 -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.839 18:28:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:41.839 18:28:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.839 18:28:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:41.839 18:28:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.839 18:28:49 -- common/autotest_common.sh@1210 -- # return 0 00:12:41.839 18:28:49 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.839 18:28:49 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:41.839 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.839 18:28:49 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.839 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.839 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.839 18:28:49 -- target/rpc.sh@99 -- # seq 1 5 00:12:41.839 18:28:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.839 18:28:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.839 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.839 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.839 18:28:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.839 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.839 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 [2024-07-14 18:28:49.244947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.839 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.839 18:28:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.839 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.839 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.839 18:28:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.839 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.839 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.098 18:28:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 [2024-07-14 18:28:49.292909] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.098 18:28:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 [2024-07-14 18:28:49.340989] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.098 18:28:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 [2024-07-14 18:28:49.389053] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.098 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.098 18:28:49 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.098 18:28:49 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.098 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.098 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.099 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.099 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 [2024-07-14 18:28:49.437156] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.099 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.099 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.099 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.099 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.099 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.099 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.099 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.099 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:42.099 18:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.099 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.099 18:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.099 18:28:49 -- target/rpc.sh@110 -- # stats='{ 00:12:42.099 "poll_groups": [ 00:12:42.099 { 00:12:42.099 "admin_qpairs": 2, 00:12:42.099 "completed_nvme_io": 166, 00:12:42.099 "current_admin_qpairs": 0, 00:12:42.099 "current_io_qpairs": 0, 00:12:42.099 "io_qpairs": 16, 00:12:42.099 "name": "nvmf_tgt_poll_group_0", 00:12:42.099 "pending_bdev_io": 0, 00:12:42.099 "transports": [ 00:12:42.099 { 00:12:42.099 "trtype": "TCP" 00:12:42.099 } 00:12:42.099 ] 00:12:42.099 }, 00:12:42.099 { 00:12:42.099 "admin_qpairs": 3, 00:12:42.099 "completed_nvme_io": 67, 00:12:42.099 "current_admin_qpairs": 0, 00:12:42.099 "current_io_qpairs": 0, 00:12:42.099 "io_qpairs": 17, 00:12:42.099 "name": "nvmf_tgt_poll_group_1", 00:12:42.099 "pending_bdev_io": 0, 00:12:42.099 "transports": [ 00:12:42.099 { 00:12:42.099 "trtype": "TCP" 00:12:42.099 } 00:12:42.099 ] 00:12:42.099 }, 00:12:42.099 { 00:12:42.099 "admin_qpairs": 1, 00:12:42.099 "completed_nvme_io": 69, 00:12:42.099 "current_admin_qpairs": 0, 00:12:42.099 "current_io_qpairs": 0, 00:12:42.099 "io_qpairs": 19, 00:12:42.099 "name": "nvmf_tgt_poll_group_2", 00:12:42.099 "pending_bdev_io": 0, 00:12:42.099 "transports": [ 00:12:42.099 { 00:12:42.099 "trtype": "TCP" 00:12:42.099 } 00:12:42.099 ] 00:12:42.099 }, 00:12:42.099 { 00:12:42.099 "admin_qpairs": 1, 00:12:42.099 "completed_nvme_io": 118, 00:12:42.099 "current_admin_qpairs": 0, 00:12:42.099 "current_io_qpairs": 0, 00:12:42.099 "io_qpairs": 18, 00:12:42.099 "name": "nvmf_tgt_poll_group_3", 00:12:42.099 "pending_bdev_io": 0, 00:12:42.099 "transports": [ 00:12:42.099 { 00:12:42.099 "trtype": "TCP" 00:12:42.099 } 00:12:42.099 ] 00:12:42.099 } 00:12:42.099 ], 00:12:42.099 "tick_rate": 2200000000 00:12:42.099 }' 00:12:42.099 18:28:49 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:42.099 18:28:49 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:42.099 18:28:49 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:42.099 18:28:49 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:42.357 18:28:49 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:42.357 18:28:49 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:42.357 18:28:49 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:42.357 18:28:49 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 
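The jcount/jsum checks in this run reduce the nvmf_get_stats JSON to single numbers: 4 poll groups, and 7 admin qpairs and 70 I/O qpairs summed across them. A sketch of helpers with the same shape, reconstructed from the trace rather than copied out of rpc.sh, and assuming a live target behind scripts/rpc.py:

rpc=scripts/rpc.py

# Count how many values a jq filter yields (one per line), e.g. poll group names.
jcount() {
    local filter=$1
    $rpc nvmf_get_stats | jq "$filter" | wc -l
}

# Sum a numeric per-poll-group field across all poll groups.
jsum() {
    local filter=$1
    $rpc nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

jcount '.poll_groups[].name'        # 4 with the 0xF core mask used here
jsum '.poll_groups[].admin_qpairs'  # 7 at this point in the run
jsum '.poll_groups[].io_qpairs'     # 70 at this point in the run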
00:12:42.357 18:28:49 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:42.357 18:28:49 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:42.357 18:28:49 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:42.357 18:28:49 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:42.357 18:28:49 -- target/rpc.sh@123 -- # nvmftestfini 00:12:42.357 18:28:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:42.357 18:28:49 -- nvmf/common.sh@116 -- # sync 00:12:42.357 18:28:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:42.357 18:28:49 -- nvmf/common.sh@119 -- # set +e 00:12:42.357 18:28:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:42.357 18:28:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:42.357 rmmod nvme_tcp 00:12:42.357 rmmod nvme_fabrics 00:12:42.357 rmmod nvme_keyring 00:12:42.357 18:28:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:42.357 18:28:49 -- nvmf/common.sh@123 -- # set -e 00:12:42.357 18:28:49 -- nvmf/common.sh@124 -- # return 0 00:12:42.357 18:28:49 -- nvmf/common.sh@477 -- # '[' -n 77768 ']' 00:12:42.357 18:28:49 -- nvmf/common.sh@478 -- # killprocess 77768 00:12:42.357 18:28:49 -- common/autotest_common.sh@926 -- # '[' -z 77768 ']' 00:12:42.357 18:28:49 -- common/autotest_common.sh@930 -- # kill -0 77768 00:12:42.357 18:28:49 -- common/autotest_common.sh@931 -- # uname 00:12:42.357 18:28:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:42.357 18:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77768 00:12:42.357 killing process with pid 77768 00:12:42.357 18:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:42.357 18:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:42.357 18:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77768' 00:12:42.357 18:28:49 -- common/autotest_common.sh@945 -- # kill 77768 00:12:42.357 18:28:49 -- common/autotest_common.sh@950 -- # wait 77768 00:12:42.614 18:28:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:42.614 18:28:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:42.614 18:28:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:42.614 18:28:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.614 18:28:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:42.614 18:28:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.614 18:28:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.614 18:28:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.614 18:28:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:42.614 ************************************ 00:12:42.614 END TEST nvmf_rpc 00:12:42.614 ************************************ 00:12:42.614 00:12:42.614 real 0m19.007s 00:12:42.614 user 1m12.012s 00:12:42.614 sys 0m2.257s 00:12:42.614 18:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.614 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.614 18:28:50 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:42.614 18:28:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:42.614 18:28:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:42.614 18:28:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.614 ************************************ 00:12:42.614 START TEST nvmf_invalid 00:12:42.614 ************************************ 00:12:42.614 
18:28:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:42.879 * Looking for test storage... 00:12:42.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:42.879 18:28:50 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:42.879 18:28:50 -- nvmf/common.sh@7 -- # uname -s 00:12:42.879 18:28:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.879 18:28:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.879 18:28:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.879 18:28:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.879 18:28:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.879 18:28:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.879 18:28:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.879 18:28:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.879 18:28:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.879 18:28:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.879 18:28:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:42.879 18:28:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:12:42.879 18:28:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.879 18:28:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.879 18:28:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:42.879 18:28:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:42.879 18:28:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.879 18:28:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.879 18:28:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.879 18:28:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.879 18:28:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.880 18:28:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.880 18:28:50 -- paths/export.sh@5 -- # export PATH 00:12:42.880 18:28:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.880 18:28:50 -- nvmf/common.sh@46 -- # : 0 00:12:42.880 18:28:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:42.880 18:28:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:42.880 18:28:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:42.880 18:28:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.880 18:28:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.880 18:28:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:42.880 18:28:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:42.880 18:28:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:42.880 18:28:50 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:42.880 18:28:50 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.880 18:28:50 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:42.880 18:28:50 -- target/invalid.sh@14 -- # target=foobar 00:12:42.880 18:28:50 -- target/invalid.sh@16 -- # RANDOM=0 00:12:42.880 18:28:50 -- target/invalid.sh@34 -- # nvmftestinit 00:12:42.880 18:28:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:42.880 18:28:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.880 18:28:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:42.880 18:28:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:42.880 18:28:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:42.880 18:28:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.880 18:28:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.880 18:28:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.880 18:28:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:42.880 18:28:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:42.880 18:28:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:42.880 18:28:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:42.880 18:28:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:42.880 18:28:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:42.880 18:28:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.880 18:28:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.880 18:28:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
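nvmf/common.sh derives the host identity once and reuses it in every connect in this log: nvme gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the bare UUID becomes the host ID. A hedged reconstruction (the exact way common.sh extracts the UUID is an assumption here):

NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:42162aed-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: host ID = UUID suffix of the host NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'

# Every connect in this run then takes the form:
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420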
00:12:42.880 18:28:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:42.880 18:28:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:42.880 18:28:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:42.880 18:28:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:42.880 18:28:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.880 18:28:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:42.880 18:28:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:42.880 18:28:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:42.880 18:28:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:42.880 18:28:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:42.880 18:28:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:42.880 Cannot find device "nvmf_tgt_br" 00:12:42.880 18:28:50 -- nvmf/common.sh@154 -- # true 00:12:42.880 18:28:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:42.880 Cannot find device "nvmf_tgt_br2" 00:12:42.880 18:28:50 -- nvmf/common.sh@155 -- # true 00:12:42.880 18:28:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:42.880 18:28:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:42.880 Cannot find device "nvmf_tgt_br" 00:12:42.880 18:28:50 -- nvmf/common.sh@157 -- # true 00:12:42.880 18:28:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:42.880 Cannot find device "nvmf_tgt_br2" 00:12:42.880 18:28:50 -- nvmf/common.sh@158 -- # true 00:12:42.880 18:28:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:42.880 18:28:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:42.880 18:28:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.880 18:28:50 -- nvmf/common.sh@161 -- # true 00:12:42.880 18:28:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.880 18:28:50 -- nvmf/common.sh@162 -- # true 00:12:42.880 18:28:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:42.880 18:28:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:42.880 18:28:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:42.880 18:28:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:42.880 18:28:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.139 18:28:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.139 18:28:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.139 18:28:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:43.139 18:28:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:43.139 18:28:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:43.139 18:28:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:43.139 18:28:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:43.139 18:28:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
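nvmf_veth_init has now created the namespace, the three veth pairs, and the addresses; bridge enslaving and the iptables ACCEPT rules follow in the next entries. A condensed replay of the commands shown above:

NS=nvmf_tgt_ns_spdk
ip netns add $NS
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns $NS
ip link set nvmf_tgt_if2 netns $NS
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator side, stays on the host
ip netns exec $NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up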
00:12:43.139 18:28:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:43.139 18:28:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:43.139 18:28:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:43.139 18:28:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:43.139 18:28:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:43.139 18:28:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.139 18:28:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.139 18:28:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:43.139 18:28:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.139 18:28:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.139 18:28:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:43.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:43.139 00:12:43.139 --- 10.0.0.2 ping statistics --- 00:12:43.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.139 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:43.139 18:28:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:43.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:43.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:12:43.139 00:12:43.139 --- 10.0.0.3 ping statistics --- 00:12:43.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.139 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:43.139 18:28:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:43.139 00:12:43.139 --- 10.0.0.1 ping statistics --- 00:12:43.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.139 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:43.139 18:28:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.139 18:28:50 -- nvmf/common.sh@421 -- # return 0 00:12:43.139 18:28:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:43.139 18:28:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.139 18:28:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:43.139 18:28:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:43.139 18:28:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.139 18:28:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:43.139 18:28:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:43.139 18:28:50 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:43.139 18:28:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:43.139 18:28:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:43.139 18:28:50 -- common/autotest_common.sh@10 -- # set +x 00:12:43.139 18:28:50 -- nvmf/common.sh@469 -- # nvmfpid=78274 00:12:43.139 18:28:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.139 18:28:50 -- nvmf/common.sh@470 -- # waitforlisten 78274 00:12:43.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
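The nvmf_veth_init output above builds the topology used for the rest of this run: a host-side veth (nvmf_init_if, 10.0.0.1/24) plus two target-side veths (nvmf_tgt_if 10.0.0.2/24, nvmf_tgt_if2 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, with each peer end enslaved to the nvmf_br bridge and TCP port 4420 opened toward the initiator, verified by the three pings. A condensed sketch of the same setup (one target interface only, names and addresses taken from the log above) is:

    # create the target namespace and the veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address and bring up both ends
    ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the host-side peers together and open the NVMe/TCP port
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target reachability, as checked above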
00:12:43.139 18:28:50 -- common/autotest_common.sh@819 -- # '[' -z 78274 ']' 00:12:43.139 18:28:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.139 18:28:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:43.139 18:28:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.139 18:28:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:43.139 18:28:50 -- common/autotest_common.sh@10 -- # set +x 00:12:43.139 [2024-07-14 18:28:50.537793] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:43.139 [2024-07-14 18:28:50.537879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.397 [2024-07-14 18:28:50.673682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.397 [2024-07-14 18:28:50.741653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:43.397 [2024-07-14 18:28:50.742012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.397 [2024-07-14 18:28:50.742119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.397 [2024-07-14 18:28:50.742298] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.397 [2024-07-14 18:28:50.742540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.397 [2024-07-14 18:28:50.742729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.397 [2024-07-14 18:28:50.742959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.397 [2024-07-14 18:28:50.742980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.329 18:28:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:44.329 18:28:51 -- common/autotest_common.sh@852 -- # return 0 00:12:44.329 18:28:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:44.329 18:28:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:44.329 18:28:51 -- common/autotest_common.sh@10 -- # set +x 00:12:44.329 18:28:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.329 18:28:51 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:44.329 18:28:51 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18812 00:12:44.329 [2024-07-14 18:28:51.750414] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:44.587 18:28:51 -- target/invalid.sh@40 -- # out='2024/07/14 18:28:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18812 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:44.587 request: 00:12:44.587 { 00:12:44.587 "method": "nvmf_create_subsystem", 00:12:44.587 "params": { 00:12:44.587 "nqn": "nqn.2016-06.io.spdk:cnode18812", 00:12:44.587 "tgt_name": "foobar" 00:12:44.587 } 00:12:44.587 } 00:12:44.587 Got JSON-RPC error response 00:12:44.587 GoRPCClient: error on JSON-RPC call' 00:12:44.587 
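This first negative case sets the pattern every check in invalid.sh follows: issue an nvmf_create_subsystem RPC with one deliberately bad parameter, capture the client output, and assert that the expected error text (here "Unable to find target", i.e. the -32603 response shown above) is present. A minimal sketch of that pattern, using the rpc.py path and NQN from the log above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18812 2>&1) || true
    [[ "$out" == *"Unable to find target"* ]] && echo "expected JSON-RPC error seen"

The same shape repeats below for the invalid serial number, model number, listener, and cntlid-range cases, each matching a different error substring.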
18:28:51 -- target/invalid.sh@41 -- # [[ 2024/07/14 18:28:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18812 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:44.587 request: 00:12:44.587 { 00:12:44.587 "method": "nvmf_create_subsystem", 00:12:44.587 "params": { 00:12:44.587 "nqn": "nqn.2016-06.io.spdk:cnode18812", 00:12:44.587 "tgt_name": "foobar" 00:12:44.587 } 00:12:44.587 } 00:12:44.587 Got JSON-RPC error response 00:12:44.587 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:44.587 18:28:51 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:44.587 18:28:51 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27629 00:12:44.845 [2024-07-14 18:28:52.042911] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27629: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:44.845 18:28:52 -- target/invalid.sh@45 -- # out='2024/07/14 18:28:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27629 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:44.845 request: 00:12:44.845 { 00:12:44.845 "method": "nvmf_create_subsystem", 00:12:44.845 "params": { 00:12:44.845 "nqn": "nqn.2016-06.io.spdk:cnode27629", 00:12:44.845 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:44.845 } 00:12:44.845 } 00:12:44.845 Got JSON-RPC error response 00:12:44.845 GoRPCClient: error on JSON-RPC call' 00:12:44.845 18:28:52 -- target/invalid.sh@46 -- # [[ 2024/07/14 18:28:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27629 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:44.845 request: 00:12:44.845 { 00:12:44.845 "method": "nvmf_create_subsystem", 00:12:44.845 "params": { 00:12:44.845 "nqn": "nqn.2016-06.io.spdk:cnode27629", 00:12:44.845 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:44.845 } 00:12:44.845 } 00:12:44.845 Got JSON-RPC error response 00:12:44.845 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:44.845 18:28:52 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:44.845 18:28:52 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17280 00:12:45.104 [2024-07-14 18:28:52.315188] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17280: invalid model number 'SPDK_Controller' 00:12:45.104 18:28:52 -- target/invalid.sh@50 -- # out='2024/07/14 18:28:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17280], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:45.104 request: 00:12:45.104 { 00:12:45.104 "method": "nvmf_create_subsystem", 00:12:45.104 "params": { 00:12:45.104 "nqn": "nqn.2016-06.io.spdk:cnode17280", 00:12:45.104 "model_number": "SPDK_Controller\u001f" 00:12:45.104 } 00:12:45.104 } 00:12:45.104 Got JSON-RPC error response 00:12:45.104 GoRPCClient: error on JSON-RPC call' 00:12:45.104 18:28:52 -- target/invalid.sh@51 -- # 
[[ 2024/07/14 18:28:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17280], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:45.104 request: 00:12:45.104 { 00:12:45.104 "method": "nvmf_create_subsystem", 00:12:45.104 "params": { 00:12:45.104 "nqn": "nqn.2016-06.io.spdk:cnode17280", 00:12:45.104 "model_number": "SPDK_Controller\u001f" 00:12:45.104 } 00:12:45.104 } 00:12:45.104 Got JSON-RPC error response 00:12:45.104 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:45.104 18:28:52 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:45.104 18:28:52 -- target/invalid.sh@19 -- # local length=21 ll 00:12:45.104 18:28:52 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:45.104 18:28:52 -- target/invalid.sh@21 -- # local chars 00:12:45.104 18:28:52 -- target/invalid.sh@22 -- # local string 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 110 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=n 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 68 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=D 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 74 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=J 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 115 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=s 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 121 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=y 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 73 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=I 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- 
target/invalid.sh@25 -- # printf %x 33 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+='!' 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 81 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=Q 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 69 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=E 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 38 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+='&' 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 37 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=% 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 44 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=, 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 124 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+='|' 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 88 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=X 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 122 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=z 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.104 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # printf %x 105 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:45.104 18:28:52 -- target/invalid.sh@25 -- # string+=i 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # printf %x 41 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # string+=')' 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.105 18:28:52 -- 
target/invalid.sh@25 -- # printf %x 63 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # string+='?' 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # printf %x 115 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # string+=s 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # printf %x 75 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # string+=K 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # printf %x 114 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:45.105 18:28:52 -- target/invalid.sh@25 -- # string+=r 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.105 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.105 18:28:52 -- target/invalid.sh@28 -- # [[ n == \- ]] 00:12:45.105 18:28:52 -- target/invalid.sh@31 -- # echo 'nDJsyI!QE&%,|Xzi)?sKr' 00:12:45.105 18:28:52 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'nDJsyI!QE&%,|Xzi)?sKr' nqn.2016-06.io.spdk:cnode29405 00:12:45.363 [2024-07-14 18:28:52.691731] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29405: invalid serial number 'nDJsyI!QE&%,|Xzi)?sKr' 00:12:45.363 18:28:52 -- target/invalid.sh@54 -- # out='2024/07/14 18:28:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29405 serial_number:nDJsyI!QE&%,|Xzi)?sKr], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN nDJsyI!QE&%,|Xzi)?sKr 00:12:45.363 request: 00:12:45.363 { 00:12:45.363 "method": "nvmf_create_subsystem", 00:12:45.363 "params": { 00:12:45.363 "nqn": "nqn.2016-06.io.spdk:cnode29405", 00:12:45.363 "serial_number": "nDJsyI!QE&%,|Xzi)?sKr" 00:12:45.363 } 00:12:45.363 } 00:12:45.363 Got JSON-RPC error response 00:12:45.363 GoRPCClient: error on JSON-RPC call' 00:12:45.363 18:28:52 -- target/invalid.sh@55 -- # [[ 2024/07/14 18:28:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29405 serial_number:nDJsyI!QE&%,|Xzi)?sKr], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN nDJsyI!QE&%,|Xzi)?sKr 00:12:45.363 request: 00:12:45.363 { 00:12:45.363 "method": "nvmf_create_subsystem", 00:12:45.363 "params": { 00:12:45.363 "nqn": "nqn.2016-06.io.spdk:cnode29405", 00:12:45.363 "serial_number": "nDJsyI!QE&%,|Xzi)?sKr" 00:12:45.363 } 00:12:45.363 } 00:12:45.363 Got JSON-RPC error response 00:12:45.363 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:45.363 18:28:52 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:45.363 18:28:52 -- target/invalid.sh@19 -- # local length=41 ll 00:12:45.363 18:28:52 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' 
'79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:45.363 18:28:52 -- target/invalid.sh@21 -- # local chars 00:12:45.363 18:28:52 -- target/invalid.sh@22 -- # local string 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 98 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=b 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 107 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=k 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 114 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=r 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 113 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=q 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 76 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=L 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 40 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+='(' 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 89 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=Y 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 125 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+='}' 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 98 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=b 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 89 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # 
echo -e '\x59' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=Y 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 62 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+='>' 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # printf %x 112 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:45.363 18:28:52 -- target/invalid.sh@25 -- # string+=p 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.363 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.364 18:28:52 -- target/invalid.sh@25 -- # printf %x 87 00:12:45.364 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:45.364 18:28:52 -- target/invalid.sh@25 -- # string+=W 00:12:45.364 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.364 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.364 18:28:52 -- target/invalid.sh@25 -- # printf %x 69 00:12:45.364 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:45.364 18:28:52 -- target/invalid.sh@25 -- # string+=E 00:12:45.364 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.364 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 80 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=P 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 73 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=I 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 55 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=7 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 117 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=u 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 59 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=';' 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 81 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=Q 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 73 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo 
-e '\x49' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=I 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 124 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+='|' 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 63 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+='?' 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 54 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=6 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 114 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=r 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 95 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=_ 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 93 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=']' 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 90 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=Z 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 100 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=d 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 88 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=X 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 44 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=, 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 83 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e 
'\x53' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=S 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 87 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=W 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 68 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=D 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 85 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=U 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 119 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=w 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 95 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=_ 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 100 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=d 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 100 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=d 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 77 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=M 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # printf %x 67 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:45.622 18:28:52 -- target/invalid.sh@25 -- # string+=C 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.622 18:28:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.622 18:28:52 -- target/invalid.sh@28 -- # [[ b == \- ]] 00:12:45.622 18:28:52 -- target/invalid.sh@31 -- # echo 'bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC' 00:12:45.622 18:28:52 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC' nqn.2016-06.io.spdk:cnode29086 00:12:45.880 [2024-07-14 18:28:53.172457] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode29086: invalid model number 'bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC' 00:12:45.880 18:28:53 -- target/invalid.sh@58 -- # out='2024/07/14 18:28:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC nqn:nqn.2016-06.io.spdk:cnode29086], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC 00:12:45.880 request: 00:12:45.880 { 00:12:45.880 "method": "nvmf_create_subsystem", 00:12:45.880 "params": { 00:12:45.880 "nqn": "nqn.2016-06.io.spdk:cnode29086", 00:12:45.880 "model_number": "bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC" 00:12:45.880 } 00:12:45.880 } 00:12:45.880 Got JSON-RPC error response 00:12:45.880 GoRPCClient: error on JSON-RPC call' 00:12:45.880 18:28:53 -- target/invalid.sh@59 -- # [[ 2024/07/14 18:28:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC nqn:nqn.2016-06.io.spdk:cnode29086], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC 00:12:45.880 request: 00:12:45.880 { 00:12:45.880 "method": "nvmf_create_subsystem", 00:12:45.880 "params": { 00:12:45.880 "nqn": "nqn.2016-06.io.spdk:cnode29086", 00:12:45.880 "model_number": "bkrqL(Y}bY>pWEPI7u;QI|?6r_]ZdX,SWDUw_ddMC" 00:12:45.880 } 00:12:45.880 } 00:12:45.880 Got JSON-RPC error response 00:12:45.880 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:45.880 18:28:53 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:46.137 [2024-07-14 18:28:53.424850] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.137 18:28:53 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:46.394 18:28:53 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:46.394 18:28:53 -- target/invalid.sh@67 -- # echo '' 00:12:46.394 18:28:53 -- target/invalid.sh@67 -- # head -n 1 00:12:46.394 18:28:53 -- target/invalid.sh@67 -- # IP= 00:12:46.394 18:28:53 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:46.651 [2024-07-14 18:28:53.947099] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:46.651 18:28:53 -- target/invalid.sh@69 -- # out='2024/07/14 18:28:53 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:46.651 request: 00:12:46.651 { 00:12:46.651 "method": "nvmf_subsystem_remove_listener", 00:12:46.651 "params": { 00:12:46.651 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:46.651 "listen_address": { 00:12:46.651 "trtype": "tcp", 00:12:46.651 "traddr": "", 00:12:46.651 "trsvcid": "4421" 00:12:46.651 } 00:12:46.651 } 00:12:46.651 } 00:12:46.651 Got JSON-RPC error response 00:12:46.651 GoRPCClient: error on JSON-RPC call' 00:12:46.651 18:28:53 -- target/invalid.sh@70 -- # [[ 2024/07/14 18:28:53 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for 
nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:46.651 request: 00:12:46.651 { 00:12:46.651 "method": "nvmf_subsystem_remove_listener", 00:12:46.651 "params": { 00:12:46.651 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:46.651 "listen_address": { 00:12:46.651 "trtype": "tcp", 00:12:46.651 "traddr": "", 00:12:46.651 "trsvcid": "4421" 00:12:46.651 } 00:12:46.651 } 00:12:46.651 } 00:12:46.651 Got JSON-RPC error response 00:12:46.651 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:46.651 18:28:53 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21871 -i 0 00:12:46.908 [2024-07-14 18:28:54.163340] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21871: invalid cntlid range [0-65519] 00:12:46.908 18:28:54 -- target/invalid.sh@73 -- # out='2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21871], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:46.908 request: 00:12:46.908 { 00:12:46.908 "method": "nvmf_create_subsystem", 00:12:46.908 "params": { 00:12:46.908 "nqn": "nqn.2016-06.io.spdk:cnode21871", 00:12:46.908 "min_cntlid": 0 00:12:46.908 } 00:12:46.908 } 00:12:46.908 Got JSON-RPC error response 00:12:46.908 GoRPCClient: error on JSON-RPC call' 00:12:46.908 18:28:54 -- target/invalid.sh@74 -- # [[ 2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21871], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:46.908 request: 00:12:46.908 { 00:12:46.908 "method": "nvmf_create_subsystem", 00:12:46.908 "params": { 00:12:46.908 "nqn": "nqn.2016-06.io.spdk:cnode21871", 00:12:46.908 "min_cntlid": 0 00:12:46.908 } 00:12:46.908 } 00:12:46.908 Got JSON-RPC error response 00:12:46.908 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:46.908 18:28:54 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16760 -i 65520 00:12:47.165 [2024-07-14 18:28:54.431767] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16760: invalid cntlid range [65520-65519] 00:12:47.165 18:28:54 -- target/invalid.sh@75 -- # out='2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16760], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:47.165 request: 00:12:47.165 { 00:12:47.165 "method": "nvmf_create_subsystem", 00:12:47.165 "params": { 00:12:47.165 "nqn": "nqn.2016-06.io.spdk:cnode16760", 00:12:47.165 "min_cntlid": 65520 00:12:47.165 } 00:12:47.165 } 00:12:47.165 Got JSON-RPC error response 00:12:47.165 GoRPCClient: error on JSON-RPC call' 00:12:47.165 18:28:54 -- target/invalid.sh@76 -- # [[ 2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16760], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:47.165 request: 00:12:47.165 { 00:12:47.165 "method": "nvmf_create_subsystem", 00:12:47.165 "params": { 00:12:47.165 "nqn": 
"nqn.2016-06.io.spdk:cnode16760", 00:12:47.165 "min_cntlid": 65520 00:12:47.165 } 00:12:47.165 } 00:12:47.165 Got JSON-RPC error response 00:12:47.165 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.165 18:28:54 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17149 -I 0 00:12:47.423 [2024-07-14 18:28:54.652187] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17149: invalid cntlid range [1-0] 00:12:47.423 18:28:54 -- target/invalid.sh@77 -- # out='2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17149], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:47.423 request: 00:12:47.423 { 00:12:47.423 "method": "nvmf_create_subsystem", 00:12:47.423 "params": { 00:12:47.423 "nqn": "nqn.2016-06.io.spdk:cnode17149", 00:12:47.423 "max_cntlid": 0 00:12:47.423 } 00:12:47.423 } 00:12:47.423 Got JSON-RPC error response 00:12:47.423 GoRPCClient: error on JSON-RPC call' 00:12:47.423 18:28:54 -- target/invalid.sh@78 -- # [[ 2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17149], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:47.423 request: 00:12:47.423 { 00:12:47.423 "method": "nvmf_create_subsystem", 00:12:47.423 "params": { 00:12:47.423 "nqn": "nqn.2016-06.io.spdk:cnode17149", 00:12:47.423 "max_cntlid": 0 00:12:47.423 } 00:12:47.423 } 00:12:47.423 Got JSON-RPC error response 00:12:47.423 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.423 18:28:54 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12407 -I 65520 00:12:47.680 [2024-07-14 18:28:54.872627] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12407: invalid cntlid range [1-65520] 00:12:47.680 18:28:54 -- target/invalid.sh@79 -- # out='2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12407], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:47.680 request: 00:12:47.680 { 00:12:47.680 "method": "nvmf_create_subsystem", 00:12:47.680 "params": { 00:12:47.680 "nqn": "nqn.2016-06.io.spdk:cnode12407", 00:12:47.680 "max_cntlid": 65520 00:12:47.680 } 00:12:47.680 } 00:12:47.680 Got JSON-RPC error response 00:12:47.680 GoRPCClient: error on JSON-RPC call' 00:12:47.680 18:28:54 -- target/invalid.sh@80 -- # [[ 2024/07/14 18:28:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12407], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:47.680 request: 00:12:47.680 { 00:12:47.680 "method": "nvmf_create_subsystem", 00:12:47.680 "params": { 00:12:47.680 "nqn": "nqn.2016-06.io.spdk:cnode12407", 00:12:47.680 "max_cntlid": 65520 00:12:47.680 } 00:12:47.680 } 00:12:47.680 Got JSON-RPC error response 00:12:47.680 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.680 18:28:54 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode31092 -i 6 -I 5 00:12:47.681 [2024-07-14 18:28:55.084999] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31092: invalid cntlid range [6-5] 00:12:47.939 18:28:55 -- target/invalid.sh@83 -- # out='2024/07/14 18:28:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31092], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:47.939 request: 00:12:47.939 { 00:12:47.939 "method": "nvmf_create_subsystem", 00:12:47.939 "params": { 00:12:47.939 "nqn": "nqn.2016-06.io.spdk:cnode31092", 00:12:47.939 "min_cntlid": 6, 00:12:47.939 "max_cntlid": 5 00:12:47.939 } 00:12:47.939 } 00:12:47.939 Got JSON-RPC error response 00:12:47.939 GoRPCClient: error on JSON-RPC call' 00:12:47.939 18:28:55 -- target/invalid.sh@84 -- # [[ 2024/07/14 18:28:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31092], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:47.939 request: 00:12:47.939 { 00:12:47.939 "method": "nvmf_create_subsystem", 00:12:47.939 "params": { 00:12:47.939 "nqn": "nqn.2016-06.io.spdk:cnode31092", 00:12:47.939 "min_cntlid": 6, 00:12:47.939 "max_cntlid": 5 00:12:47.939 } 00:12:47.939 } 00:12:47.939 Got JSON-RPC error response 00:12:47.939 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.939 18:28:55 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:47.939 18:28:55 -- target/invalid.sh@87 -- # out='request: 00:12:47.939 { 00:12:47.939 "name": "foobar", 00:12:47.939 "method": "nvmf_delete_target", 00:12:47.939 "req_id": 1 00:12:47.939 } 00:12:47.939 Got JSON-RPC error response 00:12:47.939 response: 00:12:47.939 { 00:12:47.939 "code": -32602, 00:12:47.939 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:47.939 }' 00:12:47.939 18:28:55 -- target/invalid.sh@88 -- # [[ request: 00:12:47.939 { 00:12:47.939 "name": "foobar", 00:12:47.939 "method": "nvmf_delete_target", 00:12:47.939 "req_id": 1 00:12:47.939 } 00:12:47.939 Got JSON-RPC error response 00:12:47.939 response: 00:12:47.939 { 00:12:47.939 "code": -32602, 00:12:47.939 "message": "The specified target doesn't exist, cannot delete it." 
00:12:47.939 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:47.939 18:28:55 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:47.939 18:28:55 -- target/invalid.sh@91 -- # nvmftestfini 00:12:47.939 18:28:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:47.939 18:28:55 -- nvmf/common.sh@116 -- # sync 00:12:47.939 18:28:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:47.939 18:28:55 -- nvmf/common.sh@119 -- # set +e 00:12:47.939 18:28:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:47.939 18:28:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:47.939 rmmod nvme_tcp 00:12:47.939 rmmod nvme_fabrics 00:12:47.939 rmmod nvme_keyring 00:12:47.939 18:28:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:47.939 18:28:55 -- nvmf/common.sh@123 -- # set -e 00:12:47.939 18:28:55 -- nvmf/common.sh@124 -- # return 0 00:12:47.939 18:28:55 -- nvmf/common.sh@477 -- # '[' -n 78274 ']' 00:12:47.939 18:28:55 -- nvmf/common.sh@478 -- # killprocess 78274 00:12:47.939 18:28:55 -- common/autotest_common.sh@926 -- # '[' -z 78274 ']' 00:12:47.939 18:28:55 -- common/autotest_common.sh@930 -- # kill -0 78274 00:12:47.939 18:28:55 -- common/autotest_common.sh@931 -- # uname 00:12:47.939 18:28:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:47.939 18:28:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78274 00:12:48.198 killing process with pid 78274 00:12:48.198 18:28:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:48.198 18:28:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:48.198 18:28:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78274' 00:12:48.198 18:28:55 -- common/autotest_common.sh@945 -- # kill 78274 00:12:48.198 18:28:55 -- common/autotest_common.sh@950 -- # wait 78274 00:12:48.198 18:28:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:48.198 18:28:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:48.198 18:28:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:48.198 18:28:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.198 18:28:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:48.198 18:28:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.198 18:28:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.198 18:28:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.198 18:28:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:48.198 ************************************ 00:12:48.198 END TEST nvmf_invalid 00:12:48.198 ************************************ 00:12:48.198 00:12:48.198 real 0m5.576s 00:12:48.198 user 0m22.323s 00:12:48.198 sys 0m1.274s 00:12:48.198 18:28:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.198 18:28:55 -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 18:28:55 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:48.457 18:28:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:48.457 18:28:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:48.457 18:28:55 -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 ************************************ 00:12:48.457 START TEST nvmf_abort 00:12:48.457 ************************************ 00:12:48.457 18:28:55 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:48.457 * Looking for test storage... 00:12:48.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.457 18:28:55 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.457 18:28:55 -- nvmf/common.sh@7 -- # uname -s 00:12:48.457 18:28:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.457 18:28:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.458 18:28:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.458 18:28:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.458 18:28:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.458 18:28:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.458 18:28:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.458 18:28:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.458 18:28:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.458 18:28:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.458 18:28:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:48.458 18:28:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:12:48.458 18:28:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.458 18:28:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.458 18:28:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.458 18:28:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.458 18:28:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.458 18:28:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.458 18:28:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.458 18:28:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.458 18:28:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.458 18:28:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.458 18:28:55 -- paths/export.sh@5 -- # export PATH 00:12:48.458 18:28:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.458 18:28:55 -- nvmf/common.sh@46 -- # : 0 00:12:48.458 18:28:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:48.458 18:28:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:48.458 18:28:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:48.458 18:28:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.458 18:28:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.458 18:28:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:48.458 18:28:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:48.458 18:28:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:48.458 18:28:55 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.458 18:28:55 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:48.458 18:28:55 -- target/abort.sh@14 -- # nvmftestinit 00:12:48.458 18:28:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:48.458 18:28:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.458 18:28:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:48.458 18:28:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:48.458 18:28:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:48.458 18:28:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.458 18:28:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.458 18:28:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.458 18:28:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:48.458 18:28:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:48.458 18:28:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:48.458 18:28:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:48.458 18:28:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:48.458 18:28:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:48.458 18:28:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.458 18:28:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.458 18:28:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.458 18:28:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:48.458 18:28:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.458 18:28:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.458 18:28:55 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.458 18:28:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.458 18:28:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.458 18:28:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.458 18:28:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.458 18:28:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.458 18:28:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:48.458 18:28:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:48.458 Cannot find device "nvmf_tgt_br" 00:12:48.458 18:28:55 -- nvmf/common.sh@154 -- # true 00:12:48.458 18:28:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.458 Cannot find device "nvmf_tgt_br2" 00:12:48.458 18:28:55 -- nvmf/common.sh@155 -- # true 00:12:48.458 18:28:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:48.458 18:28:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:48.458 Cannot find device "nvmf_tgt_br" 00:12:48.458 18:28:55 -- nvmf/common.sh@157 -- # true 00:12:48.458 18:28:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:48.458 Cannot find device "nvmf_tgt_br2" 00:12:48.458 18:28:55 -- nvmf/common.sh@158 -- # true 00:12:48.458 18:28:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:48.458 18:28:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:48.717 18:28:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.717 18:28:55 -- nvmf/common.sh@161 -- # true 00:12:48.717 18:28:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.717 18:28:55 -- nvmf/common.sh@162 -- # true 00:12:48.717 18:28:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.717 18:28:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.717 18:28:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.717 18:28:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.717 18:28:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.717 18:28:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.717 18:28:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.717 18:28:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:48.717 18:28:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:48.717 18:28:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:48.717 18:28:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:48.717 18:28:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:48.717 18:28:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:48.717 18:28:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.717 18:28:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.717 18:28:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:48.717 18:28:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:48.717 18:28:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:48.717 18:28:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.717 18:28:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.717 18:28:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.717 18:28:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.717 18:28:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.717 18:28:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:48.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:12:48.717 00:12:48.717 --- 10.0.0.2 ping statistics --- 00:12:48.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.717 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:48.717 18:28:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:48.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:48.717 00:12:48.717 --- 10.0.0.3 ping statistics --- 00:12:48.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.717 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:48.717 18:28:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:12:48.717 00:12:48.717 --- 10.0.0.1 ping statistics --- 00:12:48.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.717 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:12:48.717 18:28:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.717 18:28:56 -- nvmf/common.sh@421 -- # return 0 00:12:48.717 18:28:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:48.717 18:28:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.717 18:28:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:48.717 18:28:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:48.717 18:28:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.717 18:28:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:48.717 18:28:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:48.717 18:28:56 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:48.717 18:28:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:48.717 18:28:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:48.717 18:28:56 -- common/autotest_common.sh@10 -- # set +x 00:12:48.717 18:28:56 -- nvmf/common.sh@469 -- # nvmfpid=78777 00:12:48.718 18:28:56 -- nvmf/common.sh@470 -- # waitforlisten 78777 00:12:48.718 18:28:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:48.718 18:28:56 -- common/autotest_common.sh@819 -- # '[' -z 78777 ']' 00:12:48.718 18:28:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.718 18:28:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:48.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
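[Editor's note] For readers reconstructing what the nvmf_veth_init trace above actually did: the interface names, addresses, and the port-4420 iptables rule below are copied from the trace, but the ordering is condensed and the teardown/error handling that precedes it is omitted, so treat this as a sketch rather than the script itself.

    # target runs inside nvmf_tgt_ns_spdk; the initiator stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br     # bridge the host-side ends of the three veth pairs
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in on the initiator side

The three ping statistics blocks in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check for exactly this wiring.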
00:12:48.718 18:28:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.718 18:28:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:48.718 18:28:56 -- common/autotest_common.sh@10 -- # set +x 00:12:48.980 [2024-07-14 18:28:56.174150] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:48.980 [2024-07-14 18:28:56.174231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.980 [2024-07-14 18:28:56.315752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:48.980 [2024-07-14 18:28:56.375469] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:48.980 [2024-07-14 18:28:56.375977] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.980 [2024-07-14 18:28:56.376035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.980 [2024-07-14 18:28:56.376191] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.980 [2024-07-14 18:28:56.376515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.980 [2024-07-14 18:28:56.376691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.980 [2024-07-14 18:28:56.376692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.918 18:28:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:49.918 18:28:57 -- common/autotest_common.sh@852 -- # return 0 00:12:49.918 18:28:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:49.918 18:28:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:49.918 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 18:28:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.919 18:28:57 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 [2024-07-14 18:28:57.211178] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.919 18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 Malloc0 00:12:49.919 18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 Delay0 00:12:49.919 18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 
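[Editor's note] The -m 0xE mask passed to nvmf_tgt above is why the startup notices report three reactors on cores 1, 2 and 3: 0xE is binary 1110, so bits 1 through 3 are set. A throwaway snippet for decoding such a mask (not part of the test scripts, just an illustration of the arithmetic):

    mask=0xE
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 1, 2 and 3 for 0xE; core 0 is left free for the initiator tools,
    # which the trace launches with -c 0x1

This matches the "Total cores available: 3" notice and the three "Reactor started on core" lines above.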
18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 [2024-07-14 18:28:57.290598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.919 18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:49.919 18:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.919 18:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 18:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.919 18:28:57 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:50.176 [2024-07-14 18:28:57.467214] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:52.076 Initializing NVMe Controllers 00:12:52.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:52.076 controller IO queue size 128 less than required 00:12:52.076 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:52.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:52.076 Initialization complete. Launching workers. 
00:12:52.076 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34560 00:12:52.076 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34621, failed to submit 62 00:12:52.076 success 34560, unsuccess 61, failed 0 00:12:52.336 18:28:59 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:52.336 18:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.336 18:28:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.336 18:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.336 18:28:59 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:52.336 18:28:59 -- target/abort.sh@38 -- # nvmftestfini 00:12:52.336 18:28:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:52.336 18:28:59 -- nvmf/common.sh@116 -- # sync 00:12:52.336 18:28:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:52.336 18:28:59 -- nvmf/common.sh@119 -- # set +e 00:12:52.336 18:28:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:52.336 18:28:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:52.336 rmmod nvme_tcp 00:12:52.336 rmmod nvme_fabrics 00:12:52.336 rmmod nvme_keyring 00:12:52.336 18:28:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:52.336 18:28:59 -- nvmf/common.sh@123 -- # set -e 00:12:52.336 18:28:59 -- nvmf/common.sh@124 -- # return 0 00:12:52.336 18:28:59 -- nvmf/common.sh@477 -- # '[' -n 78777 ']' 00:12:52.336 18:28:59 -- nvmf/common.sh@478 -- # killprocess 78777 00:12:52.336 18:28:59 -- common/autotest_common.sh@926 -- # '[' -z 78777 ']' 00:12:52.336 18:28:59 -- common/autotest_common.sh@930 -- # kill -0 78777 00:12:52.336 18:28:59 -- common/autotest_common.sh@931 -- # uname 00:12:52.336 18:28:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:52.336 18:28:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78777 00:12:52.336 killing process with pid 78777 00:12:52.336 18:28:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:52.336 18:28:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:52.336 18:28:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78777' 00:12:52.336 18:28:59 -- common/autotest_common.sh@945 -- # kill 78777 00:12:52.336 18:28:59 -- common/autotest_common.sh@950 -- # wait 78777 00:12:52.595 18:28:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:52.595 18:28:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:52.595 18:28:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:52.595 18:28:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.595 18:28:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:52.595 18:28:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.595 18:28:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.595 18:28:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.595 18:28:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:52.595 00:12:52.595 real 0m4.250s 00:12:52.595 user 0m12.364s 00:12:52.595 sys 0m0.986s 00:12:52.595 18:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.595 18:28:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.595 ************************************ 00:12:52.595 END TEST nvmf_abort 00:12:52.595 ************************************ 00:12:52.595 18:28:59 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:52.595 18:28:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:52.595 18:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.595 18:28:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.595 ************************************ 00:12:52.595 START TEST nvmf_ns_hotplug_stress 00:12:52.595 ************************************ 00:12:52.595 18:28:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:52.854 * Looking for test storage... 00:12:52.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.854 18:29:00 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.854 18:29:00 -- nvmf/common.sh@7 -- # uname -s 00:12:52.854 18:29:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.854 18:29:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.854 18:29:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.854 18:29:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.854 18:29:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.854 18:29:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.854 18:29:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.854 18:29:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.854 18:29:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.854 18:29:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.854 18:29:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:12:52.854 18:29:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:12:52.854 18:29:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.854 18:29:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.854 18:29:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.854 18:29:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.854 18:29:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.854 18:29:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.854 18:29:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.854 18:29:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.854 18:29:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.854 18:29:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.854 18:29:00 -- paths/export.sh@5 -- # export PATH 00:12:52.854 18:29:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.854 18:29:00 -- nvmf/common.sh@46 -- # : 0 00:12:52.854 18:29:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.854 18:29:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.854 18:29:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.854 18:29:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.854 18:29:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.854 18:29:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:52.854 18:29:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.854 18:29:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.854 18:29:00 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.854 18:29:00 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:52.854 18:29:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.854 18:29:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.854 18:29:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.854 18:29:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.854 18:29:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.854 18:29:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.854 18:29:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.854 18:29:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.854 18:29:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:52.854 18:29:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:52.854 18:29:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:52.854 18:29:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:52.854 18:29:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:52.854 18:29:00 -- nvmf/common.sh@420 
-- # nvmf_veth_init 00:12:52.854 18:29:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.854 18:29:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.854 18:29:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:52.854 18:29:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:52.854 18:29:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.854 18:29:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.854 18:29:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.854 18:29:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.854 18:29:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.854 18:29:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.854 18:29:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.854 18:29:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.854 18:29:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:52.854 18:29:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:52.854 Cannot find device "nvmf_tgt_br" 00:12:52.854 18:29:00 -- nvmf/common.sh@154 -- # true 00:12:52.854 18:29:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.854 Cannot find device "nvmf_tgt_br2" 00:12:52.854 18:29:00 -- nvmf/common.sh@155 -- # true 00:12:52.854 18:29:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:52.854 18:29:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:52.854 Cannot find device "nvmf_tgt_br" 00:12:52.854 18:29:00 -- nvmf/common.sh@157 -- # true 00:12:52.855 18:29:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:52.855 Cannot find device "nvmf_tgt_br2" 00:12:52.855 18:29:00 -- nvmf/common.sh@158 -- # true 00:12:52.855 18:29:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:52.855 18:29:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:52.855 18:29:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.855 18:29:00 -- nvmf/common.sh@161 -- # true 00:12:52.855 18:29:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.855 18:29:00 -- nvmf/common.sh@162 -- # true 00:12:52.855 18:29:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.855 18:29:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.855 18:29:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.855 18:29:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.855 18:29:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.855 18:29:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.855 18:29:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.855 18:29:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.855 18:29:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.855 18:29:00 -- nvmf/common.sh@182 -- # ip link set 
nvmf_init_if up 00:12:52.855 18:29:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:53.114 18:29:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:53.114 18:29:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:53.114 18:29:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:53.114 18:29:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:53.114 18:29:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:53.114 18:29:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:53.114 18:29:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:53.114 18:29:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:53.114 18:29:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:53.114 18:29:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:53.114 18:29:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:53.114 18:29:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:53.114 18:29:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:53.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:12:53.114 00:12:53.114 --- 10.0.0.2 ping statistics --- 00:12:53.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.114 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:12:53.114 18:29:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:53.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:53.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:53.114 00:12:53.114 --- 10.0.0.3 ping statistics --- 00:12:53.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.114 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:53.114 18:29:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:53.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:53.114 00:12:53.114 --- 10.0.0.1 ping statistics --- 00:12:53.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.114 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:53.114 18:29:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.114 18:29:00 -- nvmf/common.sh@421 -- # return 0 00:12:53.114 18:29:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:53.114 18:29:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.114 18:29:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:53.114 18:29:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:53.114 18:29:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.114 18:29:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:53.114 18:29:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:53.114 18:29:00 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:53.114 18:29:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:53.114 18:29:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:53.114 18:29:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:53.114 18:29:00 -- nvmf/common.sh@469 -- # nvmfpid=79039 00:12:53.114 18:29:00 -- nvmf/common.sh@470 -- # waitforlisten 79039 00:12:53.114 18:29:00 -- common/autotest_common.sh@819 -- # '[' -z 79039 ']' 00:12:53.114 18:29:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.114 18:29:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:53.114 18:29:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:53.114 18:29:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.114 18:29:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:53.114 18:29:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.114 [2024-07-14 18:29:00.482312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:53.114 [2024-07-14 18:29:00.482621] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.373 [2024-07-14 18:29:00.622771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.373 [2024-07-14 18:29:00.686097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.373 [2024-07-14 18:29:00.686489] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.373 [2024-07-14 18:29:00.686634] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.373 [2024-07-14 18:29:00.686801] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
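[Editor's note] Both test scripts bring the target up the same way: launch nvmf_tgt inside the namespace, remember its pid, and block until the RPC socket answers (the nvmfpid/waitforlisten pairs traced above for pids 78777 and 79039). A minimal stand-in for that pair, assuming that polling scripts/rpc.py until it succeeds is an acceptable substitute for the real waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready (the trace waits on /var/tmp/spdk.sock)
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"    # fail early if the target already died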
00:12:53.373 [2024-07-14 18:29:00.687077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.373 [2024-07-14 18:29:00.687156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.373 [2024-07-14 18:29:00.687160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.308 18:29:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:54.308 18:29:01 -- common/autotest_common.sh@852 -- # return 0 00:12:54.308 18:29:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:54.308 18:29:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:54.308 18:29:01 -- common/autotest_common.sh@10 -- # set +x 00:12:54.308 18:29:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.308 18:29:01 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:54.308 18:29:01 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:54.566 [2024-07-14 18:29:01.773784] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.567 18:29:01 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:54.825 18:29:02 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.083 [2024-07-14 18:29:02.302865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.083 18:29:02 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:55.341 18:29:02 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:55.599 Malloc0 00:12:55.599 18:29:02 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.858 Delay0 00:12:55.858 18:29:03 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.116 18:29:03 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:56.116 NULL1 00:12:56.373 18:29:03 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:56.373 18:29:03 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79170 00:12:56.373 18:29:03 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:56.373 18:29:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:12:56.373 18:29:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.747 Read completed with error (sct=0, sc=11) 00:12:57.747 18:29:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.747 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:57.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.005 18:29:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:58.005 18:29:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:58.263 true 00:12:58.263 18:29:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:12:58.263 18:29:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.196 18:29:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.196 18:29:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:59.196 18:29:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:59.454 true 00:12:59.454 18:29:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:12:59.454 18:29:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.712 18:29:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.970 18:29:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:59.970 18:29:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:00.229 true 00:13:00.229 18:29:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:00.229 18:29:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.163 18:29:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.163 18:29:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:01.163 18:29:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:01.420 true 00:13:01.420 18:29:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:01.420 18:29:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.689 18:29:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.965 18:29:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:01.965 18:29:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:02.223 true 00:13:02.223 18:29:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:02.223 18:29:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.155 18:29:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.155 18:29:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:03.155 18:29:10 -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:03.413 true 00:13:03.413 18:29:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:03.413 18:29:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.671 18:29:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.930 18:29:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:03.930 18:29:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:04.188 true 00:13:04.188 18:29:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:04.188 18:29:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.122 18:29:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.380 18:29:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:05.380 18:29:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:05.380 true 00:13:05.639 18:29:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:05.639 18:29:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.639 18:29:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.897 18:29:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:05.897 18:29:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:06.154 true 00:13:06.154 18:29:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:06.154 18:29:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.086 18:29:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.343 18:29:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:07.343 18:29:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:07.601 true 00:13:07.601 18:29:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:07.601 18:29:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.858 18:29:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.115 18:29:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:08.115 18:29:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:08.373 true 00:13:08.373 18:29:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:08.373 18:29:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.306 18:29:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.306 18:29:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:09.306 18:29:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:09.563 true 00:13:09.563 18:29:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:09.563 18:29:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.820 18:29:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.096 18:29:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:10.096 18:29:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:10.096 true 00:13:10.096 18:29:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:10.096 18:29:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.030 18:29:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.289 18:29:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:11.289 18:29:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:11.546 true 00:13:11.546 18:29:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:11.546 18:29:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.804 18:29:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.062 18:29:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:12.062 18:29:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:12.320 true 00:13:12.320 18:29:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:12.320 18:29:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.254 18:29:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.512 18:29:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:13.512 18:29:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:13.512 true 00:13:13.770 18:29:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:13.770 18:29:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.770 18:29:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.028 18:29:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:14.028 18:29:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:14.286 true 00:13:14.286 18:29:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 
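[Editor's note] The repeating block above (lines @44 through @50 of ns_hotplug_stress.sh) is the actual stress loop: while spdk_nvme_perf is still alive, the namespace backed by Delay0 is detached and re-attached and NULL1 is grown by one unit each pass (1001, 1002, ...). Reduced to a skeleton, with the identifiers taken from the trace and the exact ordering of the steps simplified:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do      # stop as soon as the perf initiator exits
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc_py bdev_null_resize NULL1 $(( ++null_size ))
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    done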
00:13:14.286 18:29:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.220 18:29:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.478 18:29:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:15.478 18:29:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:15.478 true 00:13:15.478 18:29:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:15.478 18:29:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.736 18:29:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.993 18:29:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:15.993 18:29:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:16.249 true 00:13:16.249 18:29:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:16.249 18:29:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.182 18:29:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.440 18:29:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:17.440 18:29:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:17.440 true 00:13:17.698 18:29:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:17.698 18:29:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.698 18:29:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.956 18:29:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:17.956 18:29:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:18.215 true 00:13:18.215 18:29:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:18.215 18:29:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.150 18:29:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.408 18:29:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:19.408 18:29:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:19.666 true 00:13:19.666 18:29:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:19.666 18:29:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.924 18:29:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.182 18:29:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:20.182 18:29:27 -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:20.439 true 00:13:20.439 18:29:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:20.439 18:29:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.372 18:29:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.372 18:29:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:21.372 18:29:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:21.630 true 00:13:21.630 18:29:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:21.630 18:29:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.887 18:29:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.170 18:29:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:22.170 18:29:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:22.170 true 00:13:22.170 18:29:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:22.170 18:29:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.104 18:29:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.361 18:29:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:23.361 18:29:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:23.619 true 00:13:23.619 18:29:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:23.619 18:29:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.876 18:29:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.133 18:29:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:24.133 18:29:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:24.390 true 00:13:24.390 18:29:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:24.390 18:29:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.649 18:29:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.907 18:29:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:24.907 18:29:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:25.165 true 00:13:25.165 18:29:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:25.165 18:29:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
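[Editor's note] The "Message suppressed 999 times" lines come from the workload generator, not from the stress loop itself. For reference, the initiator command launched earlier in the run is reproduced below; the flag annotations are my reading of the trace and worth checking against spdk_nvme_perf --help rather than taking as documentation.

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $perf -c 0x1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 30 -q 128 -w randread -o 512 -Q 1000
    # -c 0x1 : pin the initiator to core 0 (the target's reactors own cores 1-3)
    # -r ... : NVMe/TCP transport ID pointing at the listener created inside the namespace
    # -t 30  : run for 30 seconds; -q 128 : queue depth; -w randread -o 512 : random 512-byte reads
    # -Q 1000: appears to govern error tolerance and log suppression, which would explain the
    #          "Message suppressed 999 times" lines (an inference from this log, not a quoted definition)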
00:13:26.152 18:29:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.425 18:29:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:26.425 18:29:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:26.683 true 00:13:26.683 18:29:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:26.683 18:29:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.616 Initializing NVMe Controllers 00:13:27.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.616 Controller IO queue size 128, less than required. 00:13:27.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.616 Controller IO queue size 128, less than required. 00:13:27.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:27.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:27.616 Initialization complete. Launching workers. 
00:13:27.616 ======================================================== 00:13:27.616 Latency(us) 00:13:27.616 Device Information : IOPS MiB/s Average min max 00:13:27.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 467.93 0.23 159427.85 4483.32 1023650.18 00:13:27.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11926.23 5.82 10732.55 3055.01 517903.83 00:13:27.616 ======================================================== 00:13:27.616 Total : 12394.17 6.05 16346.44 3055.01 1023650.18 00:13:27.616 00:13:27.616 18:29:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.873 18:29:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:27.873 18:29:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:28.130 true 00:13:28.130 18:29:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79170 00:13:28.130 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79170) - No such process 00:13:28.130 18:29:35 -- target/ns_hotplug_stress.sh@53 -- # wait 79170 00:13:28.130 18:29:35 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.389 18:29:35 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:28.389 18:29:35 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:28.389 18:29:35 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:28.389 18:29:35 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:28.389 18:29:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.389 18:29:35 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:28.647 null0 00:13:28.647 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.647 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.647 18:29:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:28.904 null1 00:13:28.904 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.904 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.904 18:29:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:29.162 null2 00:13:29.162 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.162 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.162 18:29:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:29.420 null3 00:13:29.420 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.420 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.420 18:29:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:29.678 null4 00:13:29.678 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.678 18:29:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.678 18:29:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:29.678 null5 00:13:29.935 18:29:37 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.935 18:29:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.935 18:29:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:30.192 null6 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:30.192 null7 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.192 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.193 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
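The bdev_null_create calls traced above set up eight standalone null bdevs (null0 through null7) with identical size and block-size arguments (100 and 4096); these become the hot-plug targets for the parallel workers launched next. A sketch of that setup loop, assuming the 100 is the null bdev size in MiB as the RPC's size argument takes it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    for (( i = 0; i < nthreads; i++ )); do
        $rpc bdev_null_create "null$i" 100 4096    # size 100 (MiB), 4096-byte blocks
    done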
00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
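Each null bdev is then handed to a background add_remove worker. The @14-@18 and @58-@66 trace lines imply a helper that attaches the bdev under a fixed namespace ID and detaches it again ten times, with all eight workers running in parallel and collected with wait. A sketch of that pattern as it appears in the trace (function body reconstructed from the xtrace, not copied verbatim from the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    add_remove() {                    # @14-@18: one hot-plug worker
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for (( i = 0; i < 8; i++ )); do   # @62-@64: one worker per null bdev
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                 # @66: "wait 80228 80229 ..." in this run

The interleaved add_ns/remove_ns entries that follow are these eight workers racing against each other on the same subsystem, which is the point of the stress test.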
00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@66 -- # wait 80228 80229 80232 80235 80236 80238 80240 80242 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:30.450 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.707 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.707 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:30.707 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:30.707 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:30.707 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:30.707 18:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:30.707 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.707 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.707 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.963 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.964 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.221 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.221 18:29:38 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.478 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.736 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.736 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.736 18:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.736 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.018 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.276 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.534 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.791 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.791 18:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.791 18:29:40 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.791 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.792 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.049 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.307 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.564 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.564 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.564 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.564 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.564 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.564 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.565 18:29:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.822 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.081 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.339 18:29:41 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.339 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:34.598 18:29:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.857 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.126 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.126 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.127 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.399 
18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.399 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.658 18:29:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.658 18:29:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.658 18:29:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.658 18:29:43 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:35.658 18:29:43 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:35.658 18:29:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:35.658 18:29:43 -- nvmf/common.sh@116 -- # sync 00:13:35.658 18:29:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:35.658 18:29:43 -- nvmf/common.sh@119 -- # set +e 00:13:35.658 18:29:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:35.658 18:29:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:35.658 rmmod nvme_tcp 00:13:35.916 rmmod nvme_fabrics 00:13:35.916 rmmod nvme_keyring 00:13:35.916 18:29:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:35.916 18:29:43 -- nvmf/common.sh@123 -- # set -e 00:13:35.916 18:29:43 -- nvmf/common.sh@124 -- # return 0 00:13:35.916 18:29:43 -- nvmf/common.sh@477 -- # '[' -n 79039 ']' 00:13:35.916 18:29:43 -- nvmf/common.sh@478 -- # killprocess 79039 00:13:35.916 18:29:43 -- common/autotest_common.sh@926 -- # '[' -z 79039 ']' 00:13:35.916 18:29:43 -- common/autotest_common.sh@930 -- # kill -0 79039 00:13:35.916 18:29:43 -- common/autotest_common.sh@931 -- # uname 00:13:35.916 18:29:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:35.916 18:29:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79039 00:13:35.916 killing process with pid 79039 00:13:35.916 18:29:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:35.916 18:29:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:35.916 18:29:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79039' 00:13:35.916 18:29:43 -- common/autotest_common.sh@945 -- # kill 79039 00:13:35.916 18:29:43 -- common/autotest_common.sh@950 -- # wait 79039 00:13:36.175 18:29:43 -- nvmf/common.sh@480 
-- # '[' '' == iso ']' 00:13:36.175 18:29:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:36.175 18:29:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:36.175 18:29:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.175 18:29:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:36.175 18:29:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.175 18:29:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.175 18:29:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.175 18:29:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:36.175 00:13:36.175 real 0m43.511s 00:13:36.175 user 3m25.704s 00:13:36.175 sys 0m12.990s 00:13:36.175 18:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.175 18:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:36.175 ************************************ 00:13:36.175 END TEST nvmf_ns_hotplug_stress 00:13:36.175 ************************************ 00:13:36.175 18:29:43 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:36.175 18:29:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:36.175 18:29:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.175 18:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:36.175 ************************************ 00:13:36.175 START TEST nvmf_connect_stress 00:13:36.175 ************************************ 00:13:36.175 18:29:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:36.175 * Looking for test storage... 00:13:36.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:36.434 18:29:43 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:36.434 18:29:43 -- nvmf/common.sh@7 -- # uname -s 00:13:36.434 18:29:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.434 18:29:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.434 18:29:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.434 18:29:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.434 18:29:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.434 18:29:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.434 18:29:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.434 18:29:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.434 18:29:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.434 18:29:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.434 18:29:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:13:36.434 18:29:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:13:36.434 18:29:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.434 18:29:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.434 18:29:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:36.434 18:29:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.434 18:29:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.434 18:29:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.434 18:29:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
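The nvmftestfini/nvmfcleanup trace just before the END TEST marker above reduces to a short teardown sequence: unload the kernel NVMe-oF initiator modules, terminate the nvmf_tgt process (pid 79039 in this run), and flush the test interface. Condensed into a sketch (the retry loop and the set +e/set -e guards from nvmf/common.sh are omitted, and the pid variable name is illustrative):

    sync
    modprobe -v -r nvme-tcp              # also drops nvme_fabrics and nvme_keyring in this run
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: the nvmf_tgt started by nvmfappstart
    ip -4 addr flush nvmf_init_if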
00:13:36.434 18:29:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.434 18:29:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.434 18:29:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.434 18:29:43 -- paths/export.sh@5 -- # export PATH 00:13:36.434 18:29:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.434 18:29:43 -- nvmf/common.sh@46 -- # : 0 00:13:36.434 18:29:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:36.434 18:29:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:36.434 18:29:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:36.434 18:29:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.434 18:29:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.434 18:29:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:36.435 18:29:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:36.435 18:29:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:36.435 18:29:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:36.435 18:29:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:36.435 18:29:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.435 18:29:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:36.435 18:29:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:36.435 18:29:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:36.435 18:29:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.435 18:29:43 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:36.435 18:29:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.435 18:29:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:36.435 18:29:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:36.435 18:29:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:36.435 18:29:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:36.435 18:29:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:36.435 18:29:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:36.435 18:29:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.435 18:29:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.435 18:29:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:36.435 18:29:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:36.435 18:29:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:36.435 18:29:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:36.435 18:29:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:36.435 18:29:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.435 18:29:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:36.435 18:29:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:36.435 18:29:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:36.435 18:29:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:36.435 18:29:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:36.435 18:29:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:36.435 Cannot find device "nvmf_tgt_br" 00:13:36.435 18:29:43 -- nvmf/common.sh@154 -- # true 00:13:36.435 18:29:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:36.435 Cannot find device "nvmf_tgt_br2" 00:13:36.435 18:29:43 -- nvmf/common.sh@155 -- # true 00:13:36.435 18:29:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:36.435 18:29:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:36.435 Cannot find device "nvmf_tgt_br" 00:13:36.435 18:29:43 -- nvmf/common.sh@157 -- # true 00:13:36.435 18:29:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:36.435 Cannot find device "nvmf_tgt_br2" 00:13:36.435 18:29:43 -- nvmf/common.sh@158 -- # true 00:13:36.435 18:29:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:36.435 18:29:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:36.435 18:29:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:36.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.435 18:29:43 -- nvmf/common.sh@161 -- # true 00:13:36.435 18:29:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:36.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.435 18:29:43 -- nvmf/common.sh@162 -- # true 00:13:36.435 18:29:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:36.435 18:29:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:36.435 18:29:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:36.435 18:29:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:36.435 18:29:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:36.435 18:29:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:36.435 18:29:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:36.435 18:29:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:36.435 18:29:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:36.694 18:29:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:36.694 18:29:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:36.694 18:29:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:36.694 18:29:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:36.694 18:29:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.694 18:29:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.694 18:29:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.694 18:29:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:36.694 18:29:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:36.694 18:29:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.694 18:29:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.694 18:29:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.694 18:29:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.694 18:29:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.694 18:29:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:36.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:36.694 00:13:36.694 --- 10.0.0.2 ping statistics --- 00:13:36.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.694 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:36.694 18:29:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:36.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:36.694 00:13:36.694 --- 10.0.0.3 ping statistics --- 00:13:36.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.694 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:36.694 18:29:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:13:36.694 00:13:36.694 --- 10.0.0.1 ping statistics --- 00:13:36.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.694 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:36.694 18:29:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.694 18:29:43 -- nvmf/common.sh@421 -- # return 0 00:13:36.694 18:29:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:36.694 18:29:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.694 18:29:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:36.694 18:29:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:36.694 18:29:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.694 18:29:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:36.694 18:29:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:36.694 18:29:43 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:36.694 18:29:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:36.694 18:29:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:36.694 18:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:36.694 18:29:43 -- nvmf/common.sh@469 -- # nvmfpid=81543 00:13:36.694 18:29:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:36.694 18:29:43 -- nvmf/common.sh@470 -- # waitforlisten 81543 00:13:36.694 18:29:43 -- common/autotest_common.sh@819 -- # '[' -z 81543 ']' 00:13:36.694 18:29:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.694 18:29:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.695 18:29:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.695 18:29:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:36.695 18:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:36.695 [2024-07-14 18:29:44.049442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:36.695 [2024-07-14 18:29:44.049559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.954 [2024-07-14 18:29:44.191051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.954 [2024-07-14 18:29:44.270037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.954 [2024-07-14 18:29:44.270241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.954 [2024-07-14 18:29:44.270267] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.954 [2024-07-14 18:29:44.270286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
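For reference, the veth/bridge topology that nvmf_veth_init assembled in the trace above reduces to the following condensed sketch. This is a reconstruction from the trace, not the harness script itself; it assumes root privileges with iproute2 and iptables available, and the interface names, addresses and the 4420 NVMe/TCP port are taken verbatim from the log:

# create the target network namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# address the initiator and the two target interfaces
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# verify connectivity in both directions, as the pings above do
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: the harness first tears down any leftover topology from a previous run, so those cleanup commands fail harmlessly on a fresh node.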
00:13:36.954 [2024-07-14 18:29:44.270541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.954 [2024-07-14 18:29:44.271210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.954 [2024-07-14 18:29:44.271256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.892 18:29:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:37.892 18:29:45 -- common/autotest_common.sh@852 -- # return 0 00:13:37.892 18:29:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:37.892 18:29:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:37.892 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:37.892 18:29:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.892 18:29:45 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.892 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.892 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:37.892 [2024-07-14 18:29:45.085619] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.892 18:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.892 18:29:45 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:37.892 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.892 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:37.892 18:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.892 18:29:45 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.892 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.892 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:37.892 [2024-07-14 18:29:45.105780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.892 18:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.892 18:29:45 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:37.892 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.892 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:37.892 NULL1 00:13:37.892 18:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.892 18:29:45 -- target/connect_stress.sh@21 -- # PERF_PID=81595 00:13:37.892 18:29:45 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:37.892 18:29:45 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:37.892 18:29:45 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- 
target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.892 18:29:45 -- target/connect_stress.sh@28 -- # cat 00:13:37.892 18:29:45 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:37.892 18:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.892 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.892 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:38.151 18:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.151 18:29:45 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:38.151 18:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.151 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.151 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:38.718 18:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.718 18:29:45 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:38.718 18:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.718 18:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.718 18:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:38.976 18:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.976 18:29:46 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:38.976 18:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.976 18:29:46 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:38.976 18:29:46 -- common/autotest_common.sh@10 -- # set +x 00:13:39.235 18:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.235 18:29:46 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:39.235 18:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.235 18:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.235 18:29:46 -- common/autotest_common.sh@10 -- # set +x 00:13:39.494 18:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.494 18:29:46 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:39.494 18:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.494 18:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.494 18:29:46 -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 18:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.752 18:29:47 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:39.752 18:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.752 18:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.752 18:29:47 -- common/autotest_common.sh@10 -- # set +x 00:13:40.319 18:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.319 18:29:47 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:40.319 18:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.319 18:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.319 18:29:47 -- common/autotest_common.sh@10 -- # set +x 00:13:40.577 18:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.577 18:29:47 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:40.577 18:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.577 18:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.577 18:29:47 -- common/autotest_common.sh@10 -- # set +x 00:13:40.836 18:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.836 18:29:48 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:40.836 18:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.836 18:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.836 18:29:48 -- common/autotest_common.sh@10 -- # set +x 00:13:41.095 18:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.095 18:29:48 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:41.095 18:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.095 18:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.095 18:29:48 -- common/autotest_common.sh@10 -- # set +x 00:13:41.354 18:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.354 18:29:48 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:41.354 18:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.354 18:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.354 18:29:48 -- common/autotest_common.sh@10 -- # set +x 00:13:41.946 18:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.946 18:29:49 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:41.946 18:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.946 18:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.946 18:29:49 -- common/autotest_common.sh@10 -- # set +x 00:13:42.204 18:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.204 18:29:49 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:42.204 18:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.204 18:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.204 
18:29:49 -- common/autotest_common.sh@10 -- # set +x 00:13:42.462 18:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.462 18:29:49 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:42.462 18:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.462 18:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.462 18:29:49 -- common/autotest_common.sh@10 -- # set +x 00:13:42.720 18:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.720 18:29:50 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:42.720 18:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.720 18:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.721 18:29:50 -- common/autotest_common.sh@10 -- # set +x 00:13:42.978 18:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.978 18:29:50 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:42.978 18:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.978 18:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.978 18:29:50 -- common/autotest_common.sh@10 -- # set +x 00:13:43.545 18:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.545 18:29:50 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:43.545 18:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.545 18:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.545 18:29:50 -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 18:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.803 18:29:51 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:43.803 18:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.803 18:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.803 18:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:44.062 18:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.062 18:29:51 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:44.062 18:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.062 18:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.062 18:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 18:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.320 18:29:51 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:44.320 18:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.320 18:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.320 18:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:44.583 18:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.583 18:29:51 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:44.583 18:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.583 18:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.583 18:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:45.149 18:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.149 18:29:52 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:45.149 18:29:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.149 18:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.149 18:29:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.407 18:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.407 18:29:52 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:45.407 18:29:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.407 18:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.407 18:29:52 -- 
common/autotest_common.sh@10 -- # set +x 00:13:45.665 18:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.665 18:29:52 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:45.665 18:29:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.665 18:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.665 18:29:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.928 18:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.928 18:29:53 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:45.928 18:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.928 18:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.928 18:29:53 -- common/autotest_common.sh@10 -- # set +x 00:13:46.207 18:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.207 18:29:53 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:46.207 18:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.207 18:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.207 18:29:53 -- common/autotest_common.sh@10 -- # set +x 00:13:46.773 18:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.773 18:29:53 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:46.773 18:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.773 18:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.773 18:29:53 -- common/autotest_common.sh@10 -- # set +x 00:13:47.031 18:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.031 18:29:54 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:47.031 18:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.031 18:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.031 18:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:47.290 18:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.290 18:29:54 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:47.290 18:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.290 18:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.290 18:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 18:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.549 18:29:54 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:47.549 18:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.549 18:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.549 18:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:47.808 18:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.808 18:29:55 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:47.808 18:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.808 18:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.808 18:29:55 -- common/autotest_common.sh@10 -- # set +x 00:13:48.067 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.326 18:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.326 18:29:55 -- target/connect_stress.sh@34 -- # kill -0 81595 00:13:48.326 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81595) - No such process 00:13:48.326 18:29:55 -- target/connect_stress.sh@38 -- # wait 81595 00:13:48.326 18:29:55 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:48.326 18:29:55 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:48.326 18:29:55 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:48.326 18:29:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:48.326 18:29:55 -- nvmf/common.sh@116 -- # sync 00:13:48.326 18:29:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:48.326 18:29:55 -- nvmf/common.sh@119 -- # set +e 00:13:48.326 18:29:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:48.326 18:29:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:48.326 rmmod nvme_tcp 00:13:48.326 rmmod nvme_fabrics 00:13:48.326 rmmod nvme_keyring 00:13:48.326 18:29:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:48.326 18:29:55 -- nvmf/common.sh@123 -- # set -e 00:13:48.326 18:29:55 -- nvmf/common.sh@124 -- # return 0 00:13:48.326 18:29:55 -- nvmf/common.sh@477 -- # '[' -n 81543 ']' 00:13:48.326 18:29:55 -- nvmf/common.sh@478 -- # killprocess 81543 00:13:48.326 18:29:55 -- common/autotest_common.sh@926 -- # '[' -z 81543 ']' 00:13:48.326 18:29:55 -- common/autotest_common.sh@930 -- # kill -0 81543 00:13:48.326 18:29:55 -- common/autotest_common.sh@931 -- # uname 00:13:48.326 18:29:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.326 18:29:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81543 00:13:48.326 killing process with pid 81543 00:13:48.326 18:29:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:48.326 18:29:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:48.326 18:29:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81543' 00:13:48.326 18:29:55 -- common/autotest_common.sh@945 -- # kill 81543 00:13:48.326 18:29:55 -- common/autotest_common.sh@950 -- # wait 81543 00:13:48.585 18:29:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:48.585 18:29:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:48.585 18:29:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:48.585 18:29:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.585 18:29:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:48.585 18:29:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.585 18:29:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.585 18:29:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.585 18:29:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:48.585 00:13:48.585 real 0m12.405s 00:13:48.585 user 0m41.429s 00:13:48.585 sys 0m3.211s 00:13:48.585 18:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.585 ************************************ 00:13:48.585 END TEST nvmf_connect_stress 00:13:48.585 ************************************ 00:13:48.585 18:29:55 -- common/autotest_common.sh@10 -- # set +x 00:13:48.585 18:29:55 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:48.585 18:29:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:48.585 18:29:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.585 18:29:55 -- common/autotest_common.sh@10 -- # set +x 00:13:48.585 ************************************ 00:13:48.585 START TEST nvmf_fused_ordering 00:13:48.585 ************************************ 00:13:48.585 18:29:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:48.843 * Looking for test storage... 
00:13:48.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.843 18:29:56 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.843 18:29:56 -- nvmf/common.sh@7 -- # uname -s 00:13:48.843 18:29:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.843 18:29:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.843 18:29:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.843 18:29:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.843 18:29:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.843 18:29:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.843 18:29:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.843 18:29:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.843 18:29:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.843 18:29:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.843 18:29:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:13:48.843 18:29:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:13:48.843 18:29:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.843 18:29:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.843 18:29:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.843 18:29:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.843 18:29:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.843 18:29:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.843 18:29:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.843 18:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.843 18:29:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.843 18:29:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.843 18:29:56 -- 
paths/export.sh@5 -- # export PATH 00:13:48.843 18:29:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.843 18:29:56 -- nvmf/common.sh@46 -- # : 0 00:13:48.843 18:29:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:48.843 18:29:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:48.843 18:29:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:48.843 18:29:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.843 18:29:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.843 18:29:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:48.843 18:29:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:48.843 18:29:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:48.843 18:29:56 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:48.843 18:29:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:48.843 18:29:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.843 18:29:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:48.843 18:29:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:48.843 18:29:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:48.843 18:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.843 18:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.843 18:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.843 18:29:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:48.843 18:29:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:48.843 18:29:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:48.843 18:29:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:48.843 18:29:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:48.843 18:29:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:48.843 18:29:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.843 18:29:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.844 18:29:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.844 18:29:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:48.844 18:29:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.844 18:29:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.844 18:29:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.844 18:29:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.844 18:29:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.844 18:29:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.844 18:29:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.844 18:29:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.844 18:29:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:48.844 18:29:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:48.844 Cannot find device "nvmf_tgt_br" 00:13:48.844 
18:29:56 -- nvmf/common.sh@154 -- # true 00:13:48.844 18:29:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.844 Cannot find device "nvmf_tgt_br2" 00:13:48.844 18:29:56 -- nvmf/common.sh@155 -- # true 00:13:48.844 18:29:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:48.844 18:29:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:48.844 Cannot find device "nvmf_tgt_br" 00:13:48.844 18:29:56 -- nvmf/common.sh@157 -- # true 00:13:48.844 18:29:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:48.844 Cannot find device "nvmf_tgt_br2" 00:13:48.844 18:29:56 -- nvmf/common.sh@158 -- # true 00:13:48.844 18:29:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:48.844 18:29:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:48.844 18:29:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.844 18:29:56 -- nvmf/common.sh@161 -- # true 00:13:48.844 18:29:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.844 18:29:56 -- nvmf/common.sh@162 -- # true 00:13:48.844 18:29:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.844 18:29:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.844 18:29:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.844 18:29:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.844 18:29:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.844 18:29:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.102 18:29:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.102 18:29:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.102 18:29:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.102 18:29:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:49.102 18:29:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:49.102 18:29:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:49.102 18:29:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:49.102 18:29:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.102 18:29:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.102 18:29:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.102 18:29:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:49.102 18:29:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:49.102 18:29:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.102 18:29:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.102 18:29:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.102 18:29:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.102 18:29:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.102 18:29:56 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:49.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:13:49.102 00:13:49.102 --- 10.0.0.2 ping statistics --- 00:13:49.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.102 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:49.102 18:29:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:49.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:49.102 00:13:49.102 --- 10.0.0.3 ping statistics --- 00:13:49.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.102 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:49.102 18:29:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:49.102 00:13:49.102 --- 10.0.0.1 ping statistics --- 00:13:49.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.102 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:49.102 18:29:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.102 18:29:56 -- nvmf/common.sh@421 -- # return 0 00:13:49.102 18:29:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:49.102 18:29:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.102 18:29:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:49.102 18:29:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:49.102 18:29:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.102 18:29:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:49.102 18:29:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:49.102 18:29:56 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:49.102 18:29:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:49.102 18:29:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:49.102 18:29:56 -- common/autotest_common.sh@10 -- # set +x 00:13:49.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.102 18:29:56 -- nvmf/common.sh@469 -- # nvmfpid=81915 00:13:49.102 18:29:56 -- nvmf/common.sh@470 -- # waitforlisten 81915 00:13:49.102 18:29:56 -- common/autotest_common.sh@819 -- # '[' -z 81915 ']' 00:13:49.102 18:29:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.102 18:29:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:49.102 18:29:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.102 18:29:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.102 18:29:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:49.102 18:29:56 -- common/autotest_common.sh@10 -- # set +x 00:13:49.102 [2024-07-14 18:29:56.462184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:49.102 [2024-07-14 18:29:56.462255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.361 [2024-07-14 18:29:56.596989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.361 [2024-07-14 18:29:56.658648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:49.361 [2024-07-14 18:29:56.659083] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.361 [2024-07-14 18:29:56.659205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.361 [2024-07-14 18:29:56.659335] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.361 [2024-07-14 18:29:56.659518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.295 18:29:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:50.295 18:29:57 -- common/autotest_common.sh@852 -- # return 0 00:13:50.295 18:29:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:50.295 18:29:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 18:29:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.295 18:29:57 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.295 18:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 [2024-07-14 18:29:57.516614] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.295 18:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.295 18:29:57 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:50.295 18:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 18:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.295 18:29:57 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.295 18:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 [2024-07-14 18:29:57.536660] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.295 18:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.295 18:29:57 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:50.295 18:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 NULL1 00:13:50.295 18:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.295 18:29:57 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:50.295 18:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 18:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.295 18:29:57 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
NULL1 00:13:50.295 18:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.295 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:50.295 18:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.295 18:29:57 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:50.295 [2024-07-14 18:29:57.589658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:50.295 [2024-07-14 18:29:57.589699] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81967 ] 00:13:50.861 Attached to nqn.2016-06.io.spdk:cnode1 00:13:50.861 Namespace ID: 1 size: 1GB 00:13:50.861 fused_ordering(0) 00:13:50.861 fused_ordering(1) 00:13:50.861 fused_ordering(2) 00:13:50.861 fused_ordering(3) 00:13:50.861 fused_ordering(4) 00:13:50.861 fused_ordering(5) 00:13:50.861 fused_ordering(6) 00:13:50.861 fused_ordering(7) 00:13:50.861 fused_ordering(8) 00:13:50.861 fused_ordering(9) 00:13:50.861 fused_ordering(10) 00:13:50.861 fused_ordering(11) 00:13:50.861 fused_ordering(12) 00:13:50.861 fused_ordering(13) 00:13:50.861 fused_ordering(14) 00:13:50.861 fused_ordering(15) 00:13:50.861 fused_ordering(16) 00:13:50.861 fused_ordering(17) 00:13:50.861 fused_ordering(18) 00:13:50.861 fused_ordering(19) 00:13:50.861 fused_ordering(20) 00:13:50.861 fused_ordering(21) 00:13:50.861 fused_ordering(22) 00:13:50.861 fused_ordering(23) 00:13:50.861 fused_ordering(24) 00:13:50.861 fused_ordering(25) 00:13:50.861 fused_ordering(26) 00:13:50.861 fused_ordering(27) 00:13:50.861 fused_ordering(28) 00:13:50.861 fused_ordering(29) 00:13:50.861 fused_ordering(30) 00:13:50.861 fused_ordering(31) 00:13:50.861 fused_ordering(32) 00:13:50.861 fused_ordering(33) 00:13:50.861 fused_ordering(34) 00:13:50.861 fused_ordering(35) 00:13:50.861 fused_ordering(36) 00:13:50.861 fused_ordering(37) 00:13:50.861 fused_ordering(38) 00:13:50.861 fused_ordering(39) 00:13:50.861 fused_ordering(40) 00:13:50.861 fused_ordering(41) 00:13:50.861 fused_ordering(42) 00:13:50.861 fused_ordering(43) 00:13:50.861 fused_ordering(44) 00:13:50.861 fused_ordering(45) 00:13:50.861 fused_ordering(46) 00:13:50.861 fused_ordering(47) 00:13:50.861 fused_ordering(48) 00:13:50.861 fused_ordering(49) 00:13:50.861 fused_ordering(50) 00:13:50.861 fused_ordering(51) 00:13:50.861 fused_ordering(52) 00:13:50.861 fused_ordering(53) 00:13:50.861 fused_ordering(54) 00:13:50.861 fused_ordering(55) 00:13:50.861 fused_ordering(56) 00:13:50.861 fused_ordering(57) 00:13:50.861 fused_ordering(58) 00:13:50.861 fused_ordering(59) 00:13:50.861 fused_ordering(60) 00:13:50.861 fused_ordering(61) 00:13:50.861 fused_ordering(62) 00:13:50.861 fused_ordering(63) 00:13:50.861 fused_ordering(64) 00:13:50.861 fused_ordering(65) 00:13:50.861 fused_ordering(66) 00:13:50.861 fused_ordering(67) 00:13:50.861 fused_ordering(68) 00:13:50.861 fused_ordering(69) 00:13:50.861 fused_ordering(70) 00:13:50.861 fused_ordering(71) 00:13:50.861 fused_ordering(72) 00:13:50.861 fused_ordering(73) 00:13:50.861 fused_ordering(74) 00:13:50.861 fused_ordering(75) 00:13:50.861 fused_ordering(76) 00:13:50.861 fused_ordering(77) 00:13:50.861 fused_ordering(78) 00:13:50.861 fused_ordering(79) 00:13:50.861 fused_ordering(80) 00:13:50.861 
fused_ordering(81) 00:13:50.861 fused_ordering(82) 00:13:50.861 fused_ordering(83) 00:13:50.861 fused_ordering(84) 00:13:50.861 fused_ordering(85) 00:13:50.861 fused_ordering(86) 00:13:50.861 fused_ordering(87) 00:13:50.861 fused_ordering(88) 00:13:50.861 fused_ordering(89) 00:13:50.861 fused_ordering(90) 00:13:50.861 fused_ordering(91) 00:13:50.861 fused_ordering(92) 00:13:50.861 fused_ordering(93) 00:13:50.861 fused_ordering(94) 00:13:50.861 fused_ordering(95) 00:13:50.861 fused_ordering(96) 00:13:50.861 fused_ordering(97) 00:13:50.861 fused_ordering(98) 00:13:50.861 fused_ordering(99) 00:13:50.861 fused_ordering(100) 00:13:50.861 fused_ordering(101) 00:13:50.861 fused_ordering(102) 00:13:50.861 fused_ordering(103) 00:13:50.861 fused_ordering(104) 00:13:50.861 fused_ordering(105) 00:13:50.861 fused_ordering(106) 00:13:50.861 fused_ordering(107) 00:13:50.861 fused_ordering(108) 00:13:50.861 fused_ordering(109) 00:13:50.861 fused_ordering(110) 00:13:50.861 fused_ordering(111) 00:13:50.861 fused_ordering(112) 00:13:50.861 fused_ordering(113) 00:13:50.861 fused_ordering(114) 00:13:50.861 fused_ordering(115) 00:13:50.861 fused_ordering(116) 00:13:50.861 fused_ordering(117) 00:13:50.861 fused_ordering(118) 00:13:50.861 fused_ordering(119) 00:13:50.861 fused_ordering(120) 00:13:50.861 fused_ordering(121) 00:13:50.861 fused_ordering(122) 00:13:50.861 fused_ordering(123) 00:13:50.861 fused_ordering(124) 00:13:50.861 fused_ordering(125) 00:13:50.861 fused_ordering(126) 00:13:50.861 fused_ordering(127) 00:13:50.861 fused_ordering(128) 00:13:50.861 fused_ordering(129) 00:13:50.861 fused_ordering(130) 00:13:50.861 fused_ordering(131) 00:13:50.861 fused_ordering(132) 00:13:50.861 fused_ordering(133) 00:13:50.861 fused_ordering(134) 00:13:50.861 fused_ordering(135) 00:13:50.861 fused_ordering(136) 00:13:50.861 fused_ordering(137) 00:13:50.861 fused_ordering(138) 00:13:50.861 fused_ordering(139) 00:13:50.861 fused_ordering(140) 00:13:50.861 fused_ordering(141) 00:13:50.861 fused_ordering(142) 00:13:50.861 fused_ordering(143) 00:13:50.861 fused_ordering(144) 00:13:50.861 fused_ordering(145) 00:13:50.861 fused_ordering(146) 00:13:50.861 fused_ordering(147) 00:13:50.861 fused_ordering(148) 00:13:50.861 fused_ordering(149) 00:13:50.861 fused_ordering(150) 00:13:50.861 fused_ordering(151) 00:13:50.861 fused_ordering(152) 00:13:50.861 fused_ordering(153) 00:13:50.861 fused_ordering(154) 00:13:50.861 fused_ordering(155) 00:13:50.861 fused_ordering(156) 00:13:50.862 fused_ordering(157) 00:13:50.862 fused_ordering(158) 00:13:50.862 fused_ordering(159) 00:13:50.862 fused_ordering(160) 00:13:50.862 fused_ordering(161) 00:13:50.862 fused_ordering(162) 00:13:50.862 fused_ordering(163) 00:13:50.862 fused_ordering(164) 00:13:50.862 fused_ordering(165) 00:13:50.862 fused_ordering(166) 00:13:50.862 fused_ordering(167) 00:13:50.862 fused_ordering(168) 00:13:50.862 fused_ordering(169) 00:13:50.862 fused_ordering(170) 00:13:50.862 fused_ordering(171) 00:13:50.862 fused_ordering(172) 00:13:50.862 fused_ordering(173) 00:13:50.862 fused_ordering(174) 00:13:50.862 fused_ordering(175) 00:13:50.862 fused_ordering(176) 00:13:50.862 fused_ordering(177) 00:13:50.862 fused_ordering(178) 00:13:50.862 fused_ordering(179) 00:13:50.862 fused_ordering(180) 00:13:50.862 fused_ordering(181) 00:13:50.862 fused_ordering(182) 00:13:50.862 fused_ordering(183) 00:13:50.862 fused_ordering(184) 00:13:50.862 fused_ordering(185) 00:13:50.862 fused_ordering(186) 00:13:50.862 fused_ordering(187) 00:13:50.862 fused_ordering(188) 00:13:50.862 
fused_ordering(189) 00:13:50.862 fused_ordering(190) 00:13:50.862 fused_ordering(191) 00:13:50.862 fused_ordering(192) 00:13:50.862 fused_ordering(193) 00:13:50.862 fused_ordering(194) 00:13:50.862 fused_ordering(195) 00:13:50.862 fused_ordering(196) 00:13:50.862 fused_ordering(197) 00:13:50.862 fused_ordering(198) 00:13:50.862 fused_ordering(199) 00:13:50.862 fused_ordering(200) 00:13:50.862 fused_ordering(201) 00:13:50.862 fused_ordering(202) 00:13:50.862 fused_ordering(203) 00:13:50.862 fused_ordering(204) 00:13:50.862 fused_ordering(205) 00:13:50.862 fused_ordering(206) 00:13:50.862 fused_ordering(207) 00:13:50.862 fused_ordering(208) 00:13:50.862 fused_ordering(209) 00:13:50.862 fused_ordering(210) 00:13:50.862 fused_ordering(211) 00:13:50.862 fused_ordering(212) 00:13:50.862 fused_ordering(213) 00:13:50.862 fused_ordering(214) 00:13:50.862 fused_ordering(215) 00:13:50.862 fused_ordering(216) 00:13:50.862 fused_ordering(217) 00:13:50.862 fused_ordering(218) 00:13:50.862 fused_ordering(219) 00:13:50.862 fused_ordering(220) 00:13:50.862 fused_ordering(221) 00:13:50.862 fused_ordering(222) 00:13:50.862 fused_ordering(223) 00:13:50.862 fused_ordering(224) 00:13:50.862 fused_ordering(225) 00:13:50.862 fused_ordering(226) 00:13:50.862 fused_ordering(227) 00:13:50.862 fused_ordering(228) 00:13:50.862 fused_ordering(229) 00:13:50.862 fused_ordering(230) 00:13:50.862 fused_ordering(231) 00:13:50.862 fused_ordering(232) 00:13:50.862 fused_ordering(233) 00:13:50.862 fused_ordering(234) 00:13:50.862 fused_ordering(235) 00:13:50.862 fused_ordering(236) 00:13:50.862 fused_ordering(237) 00:13:50.862 fused_ordering(238) 00:13:50.862 fused_ordering(239) 00:13:50.862 fused_ordering(240) 00:13:50.862 fused_ordering(241) 00:13:50.862 fused_ordering(242) 00:13:50.862 fused_ordering(243) 00:13:50.862 fused_ordering(244) 00:13:50.862 fused_ordering(245) 00:13:50.862 fused_ordering(246) 00:13:50.862 fused_ordering(247) 00:13:50.862 fused_ordering(248) 00:13:50.862 fused_ordering(249) 00:13:50.862 fused_ordering(250) 00:13:50.862 fused_ordering(251) 00:13:50.862 fused_ordering(252) 00:13:50.862 fused_ordering(253) 00:13:50.862 fused_ordering(254) 00:13:50.862 fused_ordering(255) 00:13:50.862 fused_ordering(256) 00:13:50.862 fused_ordering(257) 00:13:50.862 fused_ordering(258) 00:13:50.862 fused_ordering(259) 00:13:50.862 fused_ordering(260) 00:13:50.862 fused_ordering(261) 00:13:50.862 fused_ordering(262) 00:13:50.862 fused_ordering(263) 00:13:50.862 fused_ordering(264) 00:13:50.862 fused_ordering(265) 00:13:50.862 fused_ordering(266) 00:13:50.862 fused_ordering(267) 00:13:50.862 fused_ordering(268) 00:13:50.862 fused_ordering(269) 00:13:50.862 fused_ordering(270) 00:13:50.862 fused_ordering(271) 00:13:50.862 fused_ordering(272) 00:13:50.862 fused_ordering(273) 00:13:50.862 fused_ordering(274) 00:13:50.862 fused_ordering(275) 00:13:50.862 fused_ordering(276) 00:13:50.862 fused_ordering(277) 00:13:50.862 fused_ordering(278) 00:13:50.862 fused_ordering(279) 00:13:50.862 fused_ordering(280) 00:13:50.862 fused_ordering(281) 00:13:50.862 fused_ordering(282) 00:13:50.862 fused_ordering(283) 00:13:50.862 fused_ordering(284) 00:13:50.862 fused_ordering(285) 00:13:50.862 fused_ordering(286) 00:13:50.862 fused_ordering(287) 00:13:50.862 fused_ordering(288) 00:13:50.862 fused_ordering(289) 00:13:50.862 fused_ordering(290) 00:13:50.862 fused_ordering(291) 00:13:50.862 fused_ordering(292) 00:13:50.862 fused_ordering(293) 00:13:50.862 fused_ordering(294) 00:13:50.862 fused_ordering(295) 00:13:50.862 fused_ordering(296) 
00:13:50.862 fused_ordering(297) ... 00:13:52.255 fused_ordering(941)
00:13:52.255 fused_ordering(942) 00:13:52.255 fused_ordering(943) 00:13:52.255 fused_ordering(944) 00:13:52.255 fused_ordering(945) 00:13:52.255 fused_ordering(946) 00:13:52.255 fused_ordering(947) 00:13:52.255 fused_ordering(948) 00:13:52.255 fused_ordering(949) 00:13:52.255 fused_ordering(950) 00:13:52.255 fused_ordering(951) 00:13:52.255 fused_ordering(952) 00:13:52.255 fused_ordering(953) 00:13:52.255 fused_ordering(954) 00:13:52.255 fused_ordering(955) 00:13:52.255 fused_ordering(956) 00:13:52.255 fused_ordering(957) 00:13:52.255 fused_ordering(958) 00:13:52.255 fused_ordering(959) 00:13:52.255 fused_ordering(960) 00:13:52.255 fused_ordering(961) 00:13:52.255 fused_ordering(962) 00:13:52.255 fused_ordering(963) 00:13:52.255 fused_ordering(964) 00:13:52.255 fused_ordering(965) 00:13:52.255 fused_ordering(966) 00:13:52.255 fused_ordering(967) 00:13:52.255 fused_ordering(968) 00:13:52.255 fused_ordering(969) 00:13:52.255 fused_ordering(970) 00:13:52.255 fused_ordering(971) 00:13:52.255 fused_ordering(972) 00:13:52.255 fused_ordering(973) 00:13:52.255 fused_ordering(974) 00:13:52.255 fused_ordering(975) 00:13:52.255 fused_ordering(976) 00:13:52.255 fused_ordering(977) 00:13:52.255 fused_ordering(978) 00:13:52.255 fused_ordering(979) 00:13:52.255 fused_ordering(980) 00:13:52.255 fused_ordering(981) 00:13:52.255 fused_ordering(982) 00:13:52.255 fused_ordering(983) 00:13:52.255 fused_ordering(984) 00:13:52.255 fused_ordering(985) 00:13:52.255 fused_ordering(986) 00:13:52.255 fused_ordering(987) 00:13:52.255 fused_ordering(988) 00:13:52.255 fused_ordering(989) 00:13:52.255 fused_ordering(990) 00:13:52.255 fused_ordering(991) 00:13:52.255 fused_ordering(992) 00:13:52.255 fused_ordering(993) 00:13:52.255 fused_ordering(994) 00:13:52.255 fused_ordering(995) 00:13:52.255 fused_ordering(996) 00:13:52.255 fused_ordering(997) 00:13:52.255 fused_ordering(998) 00:13:52.255 fused_ordering(999) 00:13:52.255 fused_ordering(1000) 00:13:52.255 fused_ordering(1001) 00:13:52.255 fused_ordering(1002) 00:13:52.255 fused_ordering(1003) 00:13:52.255 fused_ordering(1004) 00:13:52.255 fused_ordering(1005) 00:13:52.255 fused_ordering(1006) 00:13:52.255 fused_ordering(1007) 00:13:52.255 fused_ordering(1008) 00:13:52.255 fused_ordering(1009) 00:13:52.255 fused_ordering(1010) 00:13:52.255 fused_ordering(1011) 00:13:52.255 fused_ordering(1012) 00:13:52.255 fused_ordering(1013) 00:13:52.255 fused_ordering(1014) 00:13:52.255 fused_ordering(1015) 00:13:52.255 fused_ordering(1016) 00:13:52.255 fused_ordering(1017) 00:13:52.255 fused_ordering(1018) 00:13:52.255 fused_ordering(1019) 00:13:52.255 fused_ordering(1020) 00:13:52.255 fused_ordering(1021) 00:13:52.255 fused_ordering(1022) 00:13:52.255 fused_ordering(1023) 00:13:52.255 18:29:59 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:52.255 18:29:59 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:52.255 18:29:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:52.255 18:29:59 -- nvmf/common.sh@116 -- # sync 00:13:52.255 18:29:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:52.255 18:29:59 -- nvmf/common.sh@119 -- # set +e 00:13:52.255 18:29:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:52.255 18:29:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:52.255 rmmod nvme_tcp 00:13:52.255 rmmod nvme_fabrics 00:13:52.255 rmmod nvme_keyring 00:13:52.255 18:29:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:52.255 18:29:59 -- nvmf/common.sh@123 -- # set -e 00:13:52.255 18:29:59 -- nvmf/common.sh@124 -- # return 0 
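The unload sequence above is the shared nvmf teardown: with errexit relaxed, the kernel NVMe/TCP modules are removed in dependency order (nvme_tcp, then nvme_fabrics and nvme_keyring), retrying because the initiator side may still hold references for a moment. A minimal sketch of that pattern, with an illustrative helper name, retry count, and sleep rather than the exact nvmf/common.sh code:

```bash
# Sketch of the module-unload retry used by the teardown above.
# Helper name, retry count and sleep interval are illustrative, not the real nvmf/common.sh code.
unload_nvme_tcp_modules() {
    set +e                                   # rmmod may fail while connections drain
    for _ in {1..20}; do
        modprobe -v -r nvme-tcp && break     # removes nvme_tcp once references are gone
        sleep 0.5
    done
    modprobe -v -r nvme-fabrics              # then the fabrics (and keyring) layer
    set -e
    return 0
}
```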
00:13:52.255 18:29:59 -- nvmf/common.sh@477 -- # '[' -n 81915 ']' 00:13:52.255 18:29:59 -- nvmf/common.sh@478 -- # killprocess 81915 00:13:52.255 18:29:59 -- common/autotest_common.sh@926 -- # '[' -z 81915 ']' 00:13:52.255 18:29:59 -- common/autotest_common.sh@930 -- # kill -0 81915 00:13:52.255 18:29:59 -- common/autotest_common.sh@931 -- # uname 00:13:52.255 18:29:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:52.255 18:29:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81915 00:13:52.255 killing process with pid 81915 00:13:52.255 18:29:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:52.255 18:29:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:52.255 18:29:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81915' 00:13:52.255 18:29:59 -- common/autotest_common.sh@945 -- # kill 81915 00:13:52.255 18:29:59 -- common/autotest_common.sh@950 -- # wait 81915 00:13:52.513 18:29:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:52.513 18:29:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:52.513 18:29:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:52.513 18:29:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.513 18:29:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:52.513 18:29:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.514 18:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.514 18:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.514 18:29:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:52.514 00:13:52.514 real 0m3.812s 00:13:52.514 user 0m4.521s 00:13:52.514 sys 0m1.266s 00:13:52.514 18:29:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.514 18:29:59 -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 ************************************ 00:13:52.514 END TEST nvmf_fused_ordering 00:13:52.514 ************************************ 00:13:52.514 18:29:59 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:52.514 18:29:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:52.514 18:29:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.514 18:29:59 -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 ************************************ 00:13:52.514 START TEST nvmf_delete_subsystem 00:13:52.514 ************************************ 00:13:52.514 18:29:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:52.514 * Looking for test storage... 
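Before nvmf_delete_subsystem starts, the harness killed the previous nvmf target (pid 81915) with its killprocess helper: check the pid is still alive with kill -0, confirm the command name, then send SIGTERM and wait for it to exit. A rough, simplified equivalent (the real helper also special-cases sudo targets and non-Linux hosts):

```bash
# Sketch of the pid-guarded kill pattern shown above (simplified).
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1     # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")    # confirm what is about to be signalled
    echo "killing process with pid $pid ($name)"
    kill "$pid"                                # SIGTERM, let the target shut down cleanly
    wait "$pid" 2>/dev/null                    # reap it when it is our own child
}

# Example: killprocess_sketch 81915 mirrors the teardown of the target seen above.
```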
00:13:52.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.514 18:29:59 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.514 18:29:59 -- nvmf/common.sh@7 -- # uname -s 00:13:52.514 18:29:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.514 18:29:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.514 18:29:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.514 18:29:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.514 18:29:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.514 18:29:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.514 18:29:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.514 18:29:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.514 18:29:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.514 18:29:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.772 18:29:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:13:52.772 18:29:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:13:52.772 18:29:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.772 18:29:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.772 18:29:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.772 18:29:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.772 18:29:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.772 18:29:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.772 18:29:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.772 18:29:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 18:29:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 18:29:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 18:29:59 -- 
paths/export.sh@5 -- # export PATH 00:13:52.772 18:29:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 18:29:59 -- nvmf/common.sh@46 -- # : 0 00:13:52.772 18:29:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:52.772 18:29:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:52.772 18:29:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:52.772 18:29:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.772 18:29:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.772 18:29:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:52.772 18:29:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:52.772 18:29:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:52.772 18:29:59 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:52.772 18:29:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:52.772 18:29:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.772 18:29:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:52.772 18:29:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:52.772 18:29:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:52.772 18:29:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.772 18:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.772 18:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.772 18:29:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:52.772 18:29:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:52.772 18:29:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:52.772 18:29:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:52.772 18:29:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:52.772 18:29:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:52.772 18:29:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.772 18:29:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.772 18:29:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:52.772 18:29:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:52.772 18:29:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.772 18:29:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.772 18:29:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.772 18:29:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.772 18:29:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.772 18:29:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.772 18:29:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.772 18:29:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.772 18:29:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:52.772 18:29:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:52.772 Cannot find device "nvmf_tgt_br" 00:13:52.772 
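The "Cannot find device" and "Cannot open network namespace" messages here and on the following lines are expected: nvmf_veth_init first tears down whatever a previous run left behind, tolerating failures (hence the true guards that follow), before building a fresh topology. A hedged sketch of that idempotent cleanup, using the device and namespace names from the log and adding error suppression for illustration:

```bash
# Sketch: best-effort removal of leftover test interfaces before a new run.
# Every command is allowed to fail on a clean host; nothing here is fatal.
cleanup_stale_nvmf_links() {
    for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster 2>/dev/null    # detach from the old bridge
        ip link set "$port" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null  # stale bridge, if any
    ip link delete nvmf_init_if 2>/dev/null         # initiator-side veth pair
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null    # drop the old namespace last
}
```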
18:29:59 -- nvmf/common.sh@154 -- # true 00:13:52.772 18:29:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.772 Cannot find device "nvmf_tgt_br2" 00:13:52.772 18:29:59 -- nvmf/common.sh@155 -- # true 00:13:52.772 18:29:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:52.772 18:29:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:52.772 Cannot find device "nvmf_tgt_br" 00:13:52.772 18:30:00 -- nvmf/common.sh@157 -- # true 00:13:52.772 18:30:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:52.772 Cannot find device "nvmf_tgt_br2" 00:13:52.772 18:30:00 -- nvmf/common.sh@158 -- # true 00:13:52.772 18:30:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:52.772 18:30:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:52.772 18:30:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.772 18:30:00 -- nvmf/common.sh@161 -- # true 00:13:52.772 18:30:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.772 18:30:00 -- nvmf/common.sh@162 -- # true 00:13:52.773 18:30:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.773 18:30:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.773 18:30:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.773 18:30:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.773 18:30:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.773 18:30:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.773 18:30:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.773 18:30:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:52.773 18:30:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:52.773 18:30:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:52.773 18:30:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:52.773 18:30:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:52.773 18:30:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:52.773 18:30:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.773 18:30:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:53.031 18:30:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:53.031 18:30:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:53.031 18:30:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:53.031 18:30:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:53.031 18:30:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:53.031 18:30:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:53.031 18:30:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:53.031 18:30:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:53.031 18:30:00 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:53.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:53.031 00:13:53.031 --- 10.0.0.2 ping statistics --- 00:13:53.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.031 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:53.031 18:30:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:53.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:53.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:13:53.031 00:13:53.031 --- 10.0.0.3 ping statistics --- 00:13:53.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.031 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:53.031 18:30:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:53.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:53.031 00:13:53.031 --- 10.0.0.1 ping statistics --- 00:13:53.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.031 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:53.031 18:30:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.031 18:30:00 -- nvmf/common.sh@421 -- # return 0 00:13:53.031 18:30:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:53.031 18:30:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.031 18:30:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:53.031 18:30:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:53.031 18:30:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.031 18:30:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:53.031 18:30:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:53.031 18:30:00 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:53.031 18:30:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:53.031 18:30:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:53.031 18:30:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.031 18:30:00 -- nvmf/common.sh@469 -- # nvmfpid=82173 00:13:53.031 18:30:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:53.031 18:30:00 -- nvmf/common.sh@470 -- # waitforlisten 82173 00:13:53.031 18:30:00 -- common/autotest_common.sh@819 -- # '[' -z 82173 ']' 00:13:53.031 18:30:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.031 18:30:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.031 18:30:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.031 18:30:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.031 18:30:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.031 [2024-07-14 18:30:00.366918] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
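Taken together, the preceding lines rebuild the virtual topology every TCP test runs on: the target side of two veth pairs lives inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on nvmf_init_if, the host-side peers are bridged by nvmf_br, an iptables rule admits TCP port 4420, and three pings prove connectivity in both directions. Reassembled from those commands (same names and addresses as the log, condensed into one script):

```bash
# Sketch: the nvmf test topology (namespace + veth pairs + bridge) in one place.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target pair 2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                                # bridge the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                             # initiator -> target
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
```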
00:13:53.031 [2024-07-14 18:30:00.367375] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.289 [2024-07-14 18:30:00.512781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.289 [2024-07-14 18:30:00.594795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:53.289 [2024-07-14 18:30:00.595140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.289 [2024-07-14 18:30:00.595212] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.289 [2024-07-14 18:30:00.595513] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.289 [2024-07-14 18:30:00.595884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.289 [2024-07-14 18:30:00.595892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.223 18:30:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:54.223 18:30:01 -- common/autotest_common.sh@852 -- # return 0 00:13:54.223 18:30:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:54.223 18:30:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 18:30:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.223 18:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 [2024-07-14 18:30:01.406792] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.223 18:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.223 18:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 18:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.223 18:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 [2024-07-14 18:30:01.422907] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.223 18:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.223 18:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 NULL1 00:13:54.223 18:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:54.223 18:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 
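Once nvmf_tgt is up on cores 0 and 1 and listening on /var/tmp/spdk.sock, delete_subsystem.sh issues its setup RPCs: create the TCP transport, add subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and build a null bdev wrapped in a delay bdev so that I/O submitted just before the deletion is still queued inside the target. The Delay0 output and the namespace attach appear on the following lines. A sketch of the same sequence using SPDK's standard rpc.py client; the test itself goes through its rpc_cmd wrapper, so the exact client invocation here is an assumption:

```bash
# Sketch: the subsystem and bdev setup driven over the SPDK RPC socket.
# rpc.py path and direct invocation are assumptions; the script wraps these calls in rpc_cmd.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192              # transport options as used in the log
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                       # allow any host, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                           # listen inside the target namespace

$RPC bdev_null_create NULL1 1000 512                      # 1000 MB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000          # ~1 s injected latency on every I/O

$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # expose Delay0 as a namespace
```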
Delay0 00:13:54.223 18:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.223 18:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.223 18:30:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 18:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@28 -- # perf_pid=82224 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:54.223 18:30:01 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:54.223 [2024-07-14 18:30:01.617821] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:56.133 18:30:03 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.133 18:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.133 18:30:03 -- common/autotest_common.sh@10 -- # set +x 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 
00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Write completed with error 
(sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Write completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 Read completed with error (sct=0, sc=8) 00:13:56.403 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 [2024-07-14 18:30:03.653798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191ce60 is same with the state(5) to be set 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error 
(sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 starting I/O failed: -6 00:13:56.404 [2024-07-14 18:30:03.656207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f17fc000c00 is same with the state(5) to be set 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Write completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:56.404 Read completed with error (sct=0, sc=8) 00:13:57.338 [2024-07-14 18:30:04.632174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1920460 is same with the state(5) to be set 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 [2024-07-14 18:30:04.651731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f17fc00bf20 is same with the state(5) to be set 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 
00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 [2024-07-14 18:30:04.651935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f17fc00c600 is same with the state(5) to be set 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 [2024-07-14 18:30:04.655826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d230 is same with the state(5) to be set 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 
00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Write completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 Read completed with error (sct=0, sc=8) 00:13:57.338 [2024-07-14 18:30:04.657845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d790 is same with the state(5) to be set 00:13:57.338 [2024-07-14 18:30:04.658670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1920460 (9): Bad file descriptor 00:13:57.338 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:57.338 18:30:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.338 18:30:04 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:57.338 18:30:04 -- target/delete_subsystem.sh@35 -- # kill -0 82224 00:13:57.338 18:30:04 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:57.338 Initializing NVMe Controllers 00:13:57.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.338 Controller IO queue size 128, less than required. 00:13:57.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:57.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:57.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:57.338 Initialization complete. Launching workers. 
00:13:57.338 ======================================================== 00:13:57.338 Latency(us) 00:13:57.338 Device Information : IOPS MiB/s Average min max 00:13:57.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.73 0.09 897157.46 532.73 1011915.64 00:13:57.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 143.04 0.07 965065.71 750.07 1013531.77 00:13:57.338 ======================================================== 00:13:57.338 Total : 331.76 0.16 926435.27 532.73 1013531.77 00:13:57.338 00:13:57.904 18:30:05 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:57.904 18:30:05 -- target/delete_subsystem.sh@35 -- # kill -0 82224 00:13:57.904 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82224) - No such process 00:13:57.904 18:30:05 -- target/delete_subsystem.sh@45 -- # NOT wait 82224 00:13:57.904 18:30:05 -- common/autotest_common.sh@640 -- # local es=0 00:13:57.904 18:30:05 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 82224 00:13:57.904 18:30:05 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:57.904 18:30:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:57.904 18:30:05 -- common/autotest_common.sh@632 -- # type -t wait 00:13:57.904 18:30:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:57.904 18:30:05 -- common/autotest_common.sh@643 -- # wait 82224 00:13:57.904 18:30:05 -- common/autotest_common.sh@643 -- # es=1 00:13:57.905 18:30:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:57.905 18:30:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:57.905 18:30:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.905 18:30:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.905 18:30:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.905 18:30:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.905 18:30:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.905 18:30:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.905 [2024-07-14 18:30:05.183695] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.905 18:30:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.905 18:30:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.905 18:30:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.905 18:30:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@54 -- # perf_pid=82275 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:13:57.905 18:30:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.163 [2024-07-14 18:30:05.352602] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:58.421 18:30:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.421 18:30:05 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:13:58.421 18:30:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.987 18:30:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.987 18:30:06 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:13:58.987 18:30:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.555 18:30:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.555 18:30:06 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:13:59.555 18:30:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.814 18:30:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.814 18:30:07 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:13:59.814 18:30:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.381 18:30:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.381 18:30:07 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:14:00.381 18:30:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.948 18:30:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.948 18:30:08 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:14:00.948 18:30:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.207 Initializing NVMe Controllers 00:14:01.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.207 Controller IO queue size 128, less than required. 00:14:01.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:01.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:01.207 Initialization complete. Launching workers. 
00:14:01.207 ======================================================== 00:14:01.207 Latency(us) 00:14:01.207 Device Information : IOPS MiB/s Average min max 00:14:01.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003264.04 1000245.14 1011511.60 00:14:01.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005632.66 1000132.41 1015504.24 00:14:01.207 ======================================================== 00:14:01.207 Total : 256.00 0.12 1004448.35 1000132.41 1015504.24 00:14:01.207 00:14:01.468 18:30:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.468 18:30:08 -- target/delete_subsystem.sh@57 -- # kill -0 82275 00:14:01.468 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82275) - No such process 00:14:01.468 18:30:08 -- target/delete_subsystem.sh@67 -- # wait 82275 00:14:01.468 18:30:08 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:01.468 18:30:08 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:01.468 18:30:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:01.468 18:30:08 -- nvmf/common.sh@116 -- # sync 00:14:01.468 18:30:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:01.468 18:30:08 -- nvmf/common.sh@119 -- # set +e 00:14:01.468 18:30:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:01.468 18:30:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:01.468 rmmod nvme_tcp 00:14:01.468 rmmod nvme_fabrics 00:14:01.468 rmmod nvme_keyring 00:14:01.468 18:30:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:01.468 18:30:08 -- nvmf/common.sh@123 -- # set -e 00:14:01.468 18:30:08 -- nvmf/common.sh@124 -- # return 0 00:14:01.468 18:30:08 -- nvmf/common.sh@477 -- # '[' -n 82173 ']' 00:14:01.468 18:30:08 -- nvmf/common.sh@478 -- # killprocess 82173 00:14:01.468 18:30:08 -- common/autotest_common.sh@926 -- # '[' -z 82173 ']' 00:14:01.468 18:30:08 -- common/autotest_common.sh@930 -- # kill -0 82173 00:14:01.468 18:30:08 -- common/autotest_common.sh@931 -- # uname 00:14:01.468 18:30:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:01.468 18:30:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82173 00:14:01.468 18:30:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:01.468 18:30:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:01.468 killing process with pid 82173 00:14:01.468 18:30:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82173' 00:14:01.468 18:30:08 -- common/autotest_common.sh@945 -- # kill 82173 00:14:01.468 18:30:08 -- common/autotest_common.sh@950 -- # wait 82173 00:14:01.726 18:30:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:01.727 18:30:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:01.727 18:30:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:01.727 18:30:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.727 18:30:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:01.727 18:30:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.727 18:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.727 18:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.727 18:30:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:01.727 ************************************ 00:14:01.727 END TEST nvmf_delete_subsystem 00:14:01.727 ************************************ 
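The nvmf_delete_subsystem run that just ended is a fault-injection test: spdk_nvme_perf is started against nqn.2016-06.io.spdk:cnode1, the subsystem is torn down underneath it, and the script polls the perf PID until it exits. The flood of "Read/Write completed with error (sct=0, sc=8)" lines above is the expected outcome: status code type 0, status code 0x8 is the generic "Command Aborted due to SQ Deletion" status, which the second half of this job later prints in long form as "ABORTED - SQ DELETION (00/08)". A minimal sketch of the same pattern, assuming a target already listening on 10.0.0.2:4420 and rpc.py on the default /var/tmp/spdk.sock (this is not the verbatim delete_subsystem.sh):

```bash
#!/usr/bin/env bash
# Sketch: run a workload, yank the subsystem away, wait for the workload to die.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# Flags copied from the traced spdk_nvme_perf invocation above.
$PERF -c 0xC -q 128 -o 512 -w randrw -M 70 -t 3 -P 4 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
perf_pid=$!
sleep 1   # let the workload attach and queue I/O

# Injected fault: delete the subsystem while I/O is still in flight.
# Every queued command then completes with sct=0, sc=8.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Same polling shape as the delete_subsystem.sh@35/@36/@38 trace lines.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "spdk_nvme_perf did not exit" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" || echo "perf exited non-zero, as expected here"
```

The "kill: (82224) - No such process" message in the trace is the success case of that loop: by the time the script probes the PID again, perf has already exited on its own.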
00:14:01.727 00:14:01.727 real 0m9.250s 00:14:01.727 user 0m28.908s 00:14:01.727 sys 0m1.233s 00:14:01.727 18:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.727 18:30:09 -- common/autotest_common.sh@10 -- # set +x 00:14:01.727 18:30:09 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:01.727 18:30:09 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:01.727 18:30:09 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:01.727 18:30:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:01.727 18:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:01.727 18:30:09 -- common/autotest_common.sh@10 -- # set +x 00:14:01.984 ************************************ 00:14:01.984 START TEST nvmf_host_management 00:14:01.984 ************************************ 00:14:01.984 18:30:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:01.984 * Looking for test storage... 00:14:01.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:01.984 18:30:09 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:01.984 18:30:09 -- nvmf/common.sh@7 -- # uname -s 00:14:01.984 18:30:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.984 18:30:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.984 18:30:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.984 18:30:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.984 18:30:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.984 18:30:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.984 18:30:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.984 18:30:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.984 18:30:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.984 18:30:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.984 18:30:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:14:01.984 18:30:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:14:01.984 18:30:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.984 18:30:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.984 18:30:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:01.984 18:30:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:01.984 18:30:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.984 18:30:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.984 18:30:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.984 18:30:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.984 18:30:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.984 18:30:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.984 18:30:09 -- paths/export.sh@5 -- # export PATH 00:14:01.984 18:30:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.984 18:30:09 -- nvmf/common.sh@46 -- # : 0 00:14:01.984 18:30:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:01.984 18:30:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:01.984 18:30:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:01.984 18:30:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.984 18:30:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.984 18:30:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:01.985 18:30:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:01.985 18:30:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:01.985 18:30:09 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.985 18:30:09 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.985 18:30:09 -- target/host_management.sh@104 -- # nvmftestinit 00:14:01.985 18:30:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:01.985 18:30:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.985 18:30:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:01.985 18:30:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:01.985 18:30:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:01.985 18:30:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.985 18:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.985 18:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.985 18:30:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:01.985 18:30:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:01.985 18:30:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:01.985 18:30:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:01.985 18:30:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp 
]] 00:14:01.985 18:30:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:01.985 18:30:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.985 18:30:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.985 18:30:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:01.985 18:30:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:01.985 18:30:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:01.985 18:30:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:01.985 18:30:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:01.985 18:30:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.985 18:30:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:01.985 18:30:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:01.985 18:30:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:01.985 18:30:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:01.985 18:30:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:01.985 18:30:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:01.985 Cannot find device "nvmf_tgt_br" 00:14:01.985 18:30:09 -- nvmf/common.sh@154 -- # true 00:14:01.985 18:30:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.985 Cannot find device "nvmf_tgt_br2" 00:14:01.985 18:30:09 -- nvmf/common.sh@155 -- # true 00:14:01.985 18:30:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:01.985 18:30:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:01.985 Cannot find device "nvmf_tgt_br" 00:14:01.985 18:30:09 -- nvmf/common.sh@157 -- # true 00:14:01.985 18:30:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:01.985 Cannot find device "nvmf_tgt_br2" 00:14:01.985 18:30:09 -- nvmf/common.sh@158 -- # true 00:14:01.985 18:30:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:01.985 18:30:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:01.985 18:30:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:01.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.985 18:30:09 -- nvmf/common.sh@161 -- # true 00:14:01.985 18:30:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:01.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.985 18:30:09 -- nvmf/common.sh@162 -- # true 00:14:01.985 18:30:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:01.985 18:30:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:01.985 18:30:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:01.985 18:30:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:01.985 18:30:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:02.243 18:30:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:02.243 18:30:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:02.243 18:30:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:02.243 18:30:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:02.243 
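The nvmf_veth_init calls traced here, together with the bridge, iptables and ping lines that follow just below, build the self-contained test network: the target side lives in the nvmf_tgt_ns_spdk namespace, the initiator side stays in the root namespace, and the two meet on the nvmf_br bridge. Collapsed into plain commands, with every name and address taken from the trace, the setup is roughly:

```bash
# Namespace for the target plus three veth pairs (one initiator link, two target links).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the 10.0.0.x/24 addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and join the bridge-side ends on nvmf_br.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, allow forwarding across the bridge, then verify.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
```

The earlier "Cannot find device ..." and "No such file or directory" messages in this trace are just the cleanup of a previous run failing harmlessly before this topology is created.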
18:30:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:02.243 18:30:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:02.243 18:30:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:02.243 18:30:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:02.243 18:30:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:02.243 18:30:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:02.243 18:30:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:02.243 18:30:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:02.243 18:30:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:02.243 18:30:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:02.243 18:30:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.243 18:30:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.243 18:30:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.243 18:30:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.243 18:30:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:02.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:14:02.243 00:14:02.243 --- 10.0.0.2 ping statistics --- 00:14:02.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.243 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:02.243 18:30:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:02.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:02.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:02.243 00:14:02.243 --- 10.0.0.3 ping statistics --- 00:14:02.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.243 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:02.243 18:30:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:02.243 00:14:02.243 --- 10.0.0.1 ping statistics --- 00:14:02.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.243 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:02.243 18:30:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.243 18:30:09 -- nvmf/common.sh@421 -- # return 0 00:14:02.243 18:30:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:02.243 18:30:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.243 18:30:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:02.243 18:30:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:02.243 18:30:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.243 18:30:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:02.243 18:30:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:02.243 18:30:09 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:02.243 18:30:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:02.243 18:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.243 18:30:09 -- common/autotest_common.sh@10 -- # set +x 00:14:02.243 ************************************ 00:14:02.243 START TEST nvmf_host_management 00:14:02.243 ************************************ 00:14:02.243 18:30:09 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:14:02.243 18:30:09 -- target/host_management.sh@69 -- # starttarget 00:14:02.243 18:30:09 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:02.243 18:30:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:02.243 18:30:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:02.243 18:30:09 -- common/autotest_common.sh@10 -- # set +x 00:14:02.243 18:30:09 -- nvmf/common.sh@469 -- # nvmfpid=82510 00:14:02.243 18:30:09 -- nvmf/common.sh@470 -- # waitforlisten 82510 00:14:02.243 18:30:09 -- common/autotest_common.sh@819 -- # '[' -z 82510 ']' 00:14:02.243 18:30:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.243 18:30:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:02.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.243 18:30:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.243 18:30:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:02.243 18:30:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:02.243 18:30:09 -- common/autotest_common.sh@10 -- # set +x 00:14:02.243 [2024-07-14 18:30:09.654029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:02.243 [2024-07-14 18:30:09.654120] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.501 [2024-07-14 18:30:09.793601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.501 [2024-07-14 18:30:09.858168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:02.501 [2024-07-14 18:30:09.858311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
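nvmfappstart then launches the target inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, as traced above) and blocks in waitforlisten until the RPC socket answers. waitforlisten itself is not expanded in this log; a rough, hedged equivalent of what it waits for, assuming rpc.py and the default /var/tmp/spdk.sock, is:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

# Start the target inside the test namespace (flags copied from the trace).
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the RPC socket until the app answers (or give up after ~10 s).
for _ in $(seq 1 100); do
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening"
        break
    fi
    sleep 0.1
done
```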
00:14:02.501 [2024-07-14 18:30:09.858323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.501 [2024-07-14 18:30:09.858330] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.501 [2024-07-14 18:30:09.858470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.501 [2024-07-14 18:30:09.859229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.501 [2024-07-14 18:30:09.859409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:02.501 [2024-07-14 18:30:09.859561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.435 18:30:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:03.435 18:30:10 -- common/autotest_common.sh@852 -- # return 0 00:14:03.435 18:30:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:03.435 18:30:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:03.435 18:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 18:30:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.435 18:30:10 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.435 18:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.435 18:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 [2024-07-14 18:30:10.639643] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.435 18:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.435 18:30:10 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:03.435 18:30:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:03.435 18:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 18:30:10 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:03.435 18:30:10 -- target/host_management.sh@23 -- # cat 00:14:03.435 18:30:10 -- target/host_management.sh@30 -- # rpc_cmd 00:14:03.435 18:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.435 18:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 Malloc0 00:14:03.435 [2024-07-14 18:30:10.724764] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.435 18:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.435 18:30:10 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:03.435 18:30:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:03.435 18:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 18:30:10 -- target/host_management.sh@73 -- # perfpid=82583 00:14:03.435 18:30:10 -- target/host_management.sh@74 -- # waitforlisten 82583 /var/tmp/bdevperf.sock 00:14:03.435 18:30:10 -- common/autotest_common.sh@819 -- # '[' -z 82583 ']' 00:14:03.435 18:30:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.435 18:30:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:03.435 18:30:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
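Between the transport-init notice and the bdevperf launch, host_management.sh cats a small rpcs.txt into rpc_cmd to create the target-side objects. The file itself is not echoed in this trace; judging from the Malloc0 bdev name, the 64 MiB / 512-byte malloc defaults set earlier, and the cnode0/host0 names used from here on, it is roughly equivalent to the following (the serial number is an assumption):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                                  # traced above
$RPC bdev_malloc_create 64 512 -b Malloc0                                     # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001   # serial number assumed
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

For the nvmf_subsystem_remove_host step later in this test to have any effect, the subsystem has to rely on its host allow-list rather than --allow-any-host, which is why an explicit add_host appears in this sketch.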
00:14:03.435 18:30:10 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:03.435 18:30:10 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:03.435 18:30:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:03.435 18:30:10 -- nvmf/common.sh@520 -- # config=() 00:14:03.435 18:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 18:30:10 -- nvmf/common.sh@520 -- # local subsystem config 00:14:03.435 18:30:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:03.435 18:30:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:03.435 { 00:14:03.435 "params": { 00:14:03.435 "name": "Nvme$subsystem", 00:14:03.435 "trtype": "$TEST_TRANSPORT", 00:14:03.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.435 "adrfam": "ipv4", 00:14:03.435 "trsvcid": "$NVMF_PORT", 00:14:03.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.435 "hdgst": ${hdgst:-false}, 00:14:03.435 "ddgst": ${ddgst:-false} 00:14:03.435 }, 00:14:03.435 "method": "bdev_nvme_attach_controller" 00:14:03.435 } 00:14:03.435 EOF 00:14:03.435 )") 00:14:03.435 18:30:10 -- nvmf/common.sh@542 -- # cat 00:14:03.435 18:30:10 -- nvmf/common.sh@544 -- # jq . 00:14:03.435 18:30:10 -- nvmf/common.sh@545 -- # IFS=, 00:14:03.435 18:30:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:03.435 "params": { 00:14:03.435 "name": "Nvme0", 00:14:03.435 "trtype": "tcp", 00:14:03.435 "traddr": "10.0.0.2", 00:14:03.435 "adrfam": "ipv4", 00:14:03.435 "trsvcid": "4420", 00:14:03.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:03.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:03.435 "hdgst": false, 00:14:03.435 "ddgst": false 00:14:03.435 }, 00:14:03.435 "method": "bdev_nvme_attach_controller" 00:14:03.435 }' 00:14:03.435 [2024-07-14 18:30:10.830979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:03.435 [2024-07-14 18:30:10.831068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82583 ] 00:14:03.694 [2024-07-14 18:30:10.975882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.694 [2024-07-14 18:30:11.045649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.953 Running I/O for 10 seconds... 
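bdevperf consumes the bdev_nvme_attach_controller parameters printf'd above through --json /dev/fd/63. A stand-alone equivalent, with the parameters copied from the trace and the outer wrapper written in the usual SPDK JSON-config shape (an assumption, since gen_nvmf_target_json is not expanded in this log), would be:

```bash
cat > /tmp/nvme0_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0_config.json \
    -q 64 -o 65536 -w verify -t 10
```

Once bdevperf reports "Running I/O for 10 seconds...", the test only moves on after real I/O is observed; the waitforio trace on the next lines (bdev_get_iostat piped through jq, read_io_count=2002 compared against 100) boils down to polling bdevperf's own RPC socket, roughly:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Wait until the Nvme0n1 bdev has completed at least 100 reads.
until [ "$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')" -ge 100 ]; do
    sleep 0.25
done
```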
00:14:04.524 18:30:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:04.524 18:30:11 -- common/autotest_common.sh@852 -- # return 0 00:14:04.524 18:30:11 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:04.524 18:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.524 18:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:04.524 18:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.524 18:30:11 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.524 18:30:11 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:04.524 18:30:11 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:04.524 18:30:11 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:04.524 18:30:11 -- target/host_management.sh@52 -- # local ret=1 00:14:04.524 18:30:11 -- target/host_management.sh@53 -- # local i 00:14:04.524 18:30:11 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:04.524 18:30:11 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:04.524 18:30:11 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:04.524 18:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.524 18:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:04.524 18:30:11 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:04.524 18:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.524 18:30:11 -- target/host_management.sh@55 -- # read_io_count=2002 00:14:04.524 18:30:11 -- target/host_management.sh@58 -- # '[' 2002 -ge 100 ']' 00:14:04.524 18:30:11 -- target/host_management.sh@59 -- # ret=0 00:14:04.524 18:30:11 -- target/host_management.sh@60 -- # break 00:14:04.524 18:30:11 -- target/host_management.sh@64 -- # return 0 00:14:04.524 18:30:11 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:04.524 18:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.524 18:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:04.524 [2024-07-14 18:30:11.870822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.524 [2024-07-14 18:30:11.870952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the 
state(5) to be set 00:14:04.524 [the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1645880 repeats here many more times with only the microsecond timestamp changing; identical lines condensed] 00:14:04.525 [2024-07-14 
18:30:11.871307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.525 [2024-07-14 18:30:11.871315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.525 [2024-07-14 18:30:11.871322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.525 [2024-07-14 18:30:11.871329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.525 [2024-07-14 18:30:11.871337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645880 is same with the state(5) to be set 00:14:04.525 [2024-07-14 18:30:11.874163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.525 [2024-07-14 18:30:11.874441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.525 [2024-07-14 18:30:11.874451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.874974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.874983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:04.526 [2024-07-14 18:30:11.874994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.526 [2024-07-14 18:30:11.875181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.526 [2024-07-14 18:30:11.875190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 
[2024-07-14 18:30:11.875202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 18:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.527 [2024-07-14 18:30:11.875250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 18:30:11 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:14:04.527 [2024-07-14 18:30:11.875391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 [2024-07-14 18:30:11.875509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.527 18:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.527 [2024-07-14 18:30:11.875530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.527 [2024-07-14 18:30:11.875618] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7250c0 was disconnected and freed. reset controller. 
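The run of paired *NOTICE* lines above is the bdevperf side of the host-management failure case: after the target drops the submission queue, every outstanding READ/WRITE command is completed with ABORTED - SQ DELETION (00/08), one print_command/print_completion pair per I/O, until qpair 0x7250c0 is freed and the controller reset begins. When reading a saved copy of this console log offline, a short shell one-liner is enough to condense the flood into counts; the snippet below is only an illustration and assumes the log was saved as console.log (grep/awk are ordinary tools here, not part of the SPDK test scripts):
  # total number of commands aborted by the SQ deletion
  grep -c 'ABORTED - SQ DELETION' console.log
  # the same flood broken down by opcode (READ vs WRITE)
  grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]+' console.log | awk '{print $NF}' | sort | uniq -c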
00:14:04.527 18:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:04.527 task offset: 20224 on job bdev=Nvme0n1 fails 00:14:04.527 00:14:04.527 Latency(us) 00:14:04.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.527 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:04.527 Job: Nvme0n1 ended in about 0.65 seconds with error 00:14:04.527 Verification LBA range: start 0x0 length 0x400 00:14:04.527 Nvme0n1 : 0.65 3385.70 211.61 98.81 0.00 18011.54 1832.03 29669.93 00:14:04.527 =================================================================================================================== 00:14:04.527 Total : 3385.70 211.61 98.81 0.00 18011.54 1832.03 29669.93 00:14:04.527 [2024-07-14 18:30:11.876755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:04.527 [2024-07-14 18:30:11.878762] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.527 [2024-07-14 18:30:11.878787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7276d0 (9): Bad file descriptor 00:14:04.527 18:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.527 18:30:11 -- target/host_management.sh@87 -- # sleep 1 00:14:04.527 [2024-07-14 18:30:11.887799] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:05.903 18:30:12 -- target/host_management.sh@91 -- # kill -9 82583 00:14:05.903 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82583) - No such process 00:14:05.903 18:30:12 -- target/host_management.sh@91 -- # true 00:14:05.903 18:30:12 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:05.903 18:30:12 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:05.903 18:30:12 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:05.903 18:30:12 -- nvmf/common.sh@520 -- # config=() 00:14:05.903 18:30:12 -- nvmf/common.sh@520 -- # local subsystem config 00:14:05.903 18:30:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:05.903 18:30:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:05.903 { 00:14:05.903 "params": { 00:14:05.903 "name": "Nvme$subsystem", 00:14:05.903 "trtype": "$TEST_TRANSPORT", 00:14:05.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.903 "adrfam": "ipv4", 00:14:05.903 "trsvcid": "$NVMF_PORT", 00:14:05.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.903 "hdgst": ${hdgst:-false}, 00:14:05.903 "ddgst": ${ddgst:-false} 00:14:05.903 }, 00:14:05.903 "method": "bdev_nvme_attach_controller" 00:14:05.903 } 00:14:05.903 EOF 00:14:05.903 )") 00:14:05.903 18:30:12 -- nvmf/common.sh@542 -- # cat 00:14:05.903 18:30:12 -- nvmf/common.sh@544 -- # jq . 
00:14:05.903 18:30:12 -- nvmf/common.sh@545 -- # IFS=, 00:14:05.903 18:30:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:05.903 "params": { 00:14:05.903 "name": "Nvme0", 00:14:05.903 "trtype": "tcp", 00:14:05.903 "traddr": "10.0.0.2", 00:14:05.903 "adrfam": "ipv4", 00:14:05.903 "trsvcid": "4420", 00:14:05.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:05.903 "hdgst": false, 00:14:05.903 "ddgst": false 00:14:05.903 }, 00:14:05.903 "method": "bdev_nvme_attach_controller" 00:14:05.903 }' 00:14:05.903 [2024-07-14 18:30:12.944164] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:05.903 [2024-07-14 18:30:12.944244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82632 ] 00:14:05.903 [2024-07-14 18:30:13.080168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.903 [2024-07-14 18:30:13.167286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.161 Running I/O for 1 seconds... 00:14:07.095 00:14:07.096 Latency(us) 00:14:07.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.096 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.096 Verification LBA range: start 0x0 length 0x400 00:14:07.096 Nvme0n1 : 1.01 3546.04 221.63 0.00 0.00 17745.49 1385.19 24427.05 00:14:07.096 =================================================================================================================== 00:14:07.096 Total : 3546.04 221.63 0.00 0.00 17745.49 1385.19 24427.05 00:14:07.354 18:30:14 -- target/host_management.sh@101 -- # stoptarget 00:14:07.354 18:30:14 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:07.354 18:30:14 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:07.354 18:30:14 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:07.354 18:30:14 -- target/host_management.sh@40 -- # nvmftestfini 00:14:07.354 18:30:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:07.354 18:30:14 -- nvmf/common.sh@116 -- # sync 00:14:07.354 18:30:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:07.354 18:30:14 -- nvmf/common.sh@119 -- # set +e 00:14:07.354 18:30:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:07.354 18:30:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:07.354 rmmod nvme_tcp 00:14:07.354 rmmod nvme_fabrics 00:14:07.354 rmmod nvme_keyring 00:14:07.354 18:30:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:07.354 18:30:14 -- nvmf/common.sh@123 -- # set -e 00:14:07.354 18:30:14 -- nvmf/common.sh@124 -- # return 0 00:14:07.354 18:30:14 -- nvmf/common.sh@477 -- # '[' -n 82510 ']' 00:14:07.354 18:30:14 -- nvmf/common.sh@478 -- # killprocess 82510 00:14:07.354 18:30:14 -- common/autotest_common.sh@926 -- # '[' -z 82510 ']' 00:14:07.354 18:30:14 -- common/autotest_common.sh@930 -- # kill -0 82510 00:14:07.354 18:30:14 -- common/autotest_common.sh@931 -- # uname 00:14:07.354 18:30:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:07.354 18:30:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82510 00:14:07.354 18:30:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:07.354 18:30:14 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:07.354 killing process with pid 82510 00:14:07.354 18:30:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82510' 00:14:07.354 18:30:14 -- common/autotest_common.sh@945 -- # kill 82510 00:14:07.354 18:30:14 -- common/autotest_common.sh@950 -- # wait 82510 00:14:07.613 [2024-07-14 18:30:14.923903] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:07.613 18:30:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:07.613 18:30:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:07.613 18:30:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:07.613 18:30:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.613 18:30:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:07.613 18:30:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.613 18:30:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.613 18:30:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.613 18:30:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:07.613 00:14:07.613 real 0m5.393s 00:14:07.613 user 0m22.600s 00:14:07.613 sys 0m1.277s 00:14:07.613 18:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.613 18:30:14 -- common/autotest_common.sh@10 -- # set +x 00:14:07.613 ************************************ 00:14:07.613 END TEST nvmf_host_management 00:14:07.613 ************************************ 00:14:07.613 18:30:15 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:07.613 00:14:07.613 real 0m5.874s 00:14:07.613 user 0m22.718s 00:14:07.613 sys 0m1.519s 00:14:07.613 18:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.613 18:30:15 -- common/autotest_common.sh@10 -- # set +x 00:14:07.613 ************************************ 00:14:07.613 END TEST nvmf_host_management 00:14:07.613 ************************************ 00:14:07.872 18:30:15 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:07.872 18:30:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:07.872 18:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.872 18:30:15 -- common/autotest_common.sh@10 -- # set +x 00:14:07.872 ************************************ 00:14:07.872 START TEST nvmf_lvol 00:14:07.872 ************************************ 00:14:07.872 18:30:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:07.872 * Looking for test storage... 
00:14:07.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.872 18:30:15 -- nvmf/common.sh@7 -- # uname -s 00:14:07.872 18:30:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.872 18:30:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.872 18:30:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.872 18:30:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.872 18:30:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.872 18:30:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.872 18:30:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.872 18:30:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.872 18:30:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.872 18:30:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.872 18:30:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:14:07.872 18:30:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:14:07.872 18:30:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.872 18:30:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.872 18:30:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.872 18:30:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.872 18:30:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.872 18:30:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.872 18:30:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.872 18:30:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.872 18:30:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.872 18:30:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.872 18:30:15 -- 
paths/export.sh@5 -- # export PATH 00:14:07.872 18:30:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.872 18:30:15 -- nvmf/common.sh@46 -- # : 0 00:14:07.872 18:30:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:07.872 18:30:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:07.872 18:30:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:07.872 18:30:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.872 18:30:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.872 18:30:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:07.872 18:30:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:07.872 18:30:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.872 18:30:15 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:07.872 18:30:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:07.872 18:30:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.872 18:30:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:07.872 18:30:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:07.872 18:30:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:07.872 18:30:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.872 18:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.872 18:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.872 18:30:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:07.872 18:30:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:07.872 18:30:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:07.872 18:30:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:07.872 18:30:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:07.872 18:30:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:07.872 18:30:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.872 18:30:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.872 18:30:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.872 18:30:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:07.872 18:30:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.872 18:30:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.872 18:30:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.872 18:30:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.872 18:30:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.873 18:30:15 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.873 18:30:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.873 18:30:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.873 18:30:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:07.873 18:30:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:07.873 Cannot find device "nvmf_tgt_br" 00:14:07.873 18:30:15 -- nvmf/common.sh@154 -- # true 00:14:07.873 18:30:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.873 Cannot find device "nvmf_tgt_br2" 00:14:07.873 18:30:15 -- nvmf/common.sh@155 -- # true 00:14:07.873 18:30:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:07.873 18:30:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:07.873 Cannot find device "nvmf_tgt_br" 00:14:07.873 18:30:15 -- nvmf/common.sh@157 -- # true 00:14:07.873 18:30:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:07.873 Cannot find device "nvmf_tgt_br2" 00:14:07.873 18:30:15 -- nvmf/common.sh@158 -- # true 00:14:07.873 18:30:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:07.873 18:30:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:07.873 18:30:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.131 18:30:15 -- nvmf/common.sh@161 -- # true 00:14:08.131 18:30:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.131 18:30:15 -- nvmf/common.sh@162 -- # true 00:14:08.131 18:30:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.131 18:30:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.131 18:30:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.131 18:30:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.131 18:30:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.131 18:30:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.131 18:30:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.131 18:30:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.131 18:30:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.131 18:30:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:08.131 18:30:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:08.131 18:30:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:08.131 18:30:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:08.131 18:30:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.131 18:30:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.131 18:30:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.131 18:30:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:08.131 18:30:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:08.131 18:30:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.131 18:30:15 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.131 18:30:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.131 18:30:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.131 18:30:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.131 18:30:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:08.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:08.131 00:14:08.131 --- 10.0.0.2 ping statistics --- 00:14:08.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.131 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:08.131 18:30:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:08.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:08.131 00:14:08.131 --- 10.0.0.3 ping statistics --- 00:14:08.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.131 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:08.131 18:30:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:08.131 00:14:08.131 --- 10.0.0.1 ping statistics --- 00:14:08.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.131 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:08.131 18:30:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.131 18:30:15 -- nvmf/common.sh@421 -- # return 0 00:14:08.131 18:30:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:08.131 18:30:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.131 18:30:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:08.132 18:30:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:08.132 18:30:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.132 18:30:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:08.132 18:30:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:08.132 18:30:15 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:08.132 18:30:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:08.132 18:30:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:08.132 18:30:15 -- common/autotest_common.sh@10 -- # set +x 00:14:08.132 18:30:15 -- nvmf/common.sh@469 -- # nvmfpid=82857 00:14:08.132 18:30:15 -- nvmf/common.sh@470 -- # waitforlisten 82857 00:14:08.132 18:30:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:08.132 18:30:15 -- common/autotest_common.sh@819 -- # '[' -z 82857 ']' 00:14:08.132 18:30:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.132 18:30:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:08.132 18:30:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.132 18:30:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:08.132 18:30:15 -- common/autotest_common.sh@10 -- # set +x 00:14:08.390 [2024-07-14 18:30:15.575982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:08.390 [2024-07-14 18:30:15.576101] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.390 [2024-07-14 18:30:15.715812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.390 [2024-07-14 18:30:15.786144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:08.390 [2024-07-14 18:30:15.786294] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.390 [2024-07-14 18:30:15.786312] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.390 [2024-07-14 18:30:15.786320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.390 [2024-07-14 18:30:15.786560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.390 [2024-07-14 18:30:15.786692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.390 [2024-07-14 18:30:15.786700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.326 18:30:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:09.326 18:30:16 -- common/autotest_common.sh@852 -- # return 0 00:14:09.326 18:30:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:09.326 18:30:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:09.326 18:30:16 -- common/autotest_common.sh@10 -- # set +x 00:14:09.326 18:30:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.326 18:30:16 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:09.584 [2024-07-14 18:30:16.824914] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.584 18:30:16 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.843 18:30:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:09.843 18:30:17 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:10.102 18:30:17 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:10.102 18:30:17 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:10.362 18:30:17 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:10.620 18:30:17 -- target/nvmf_lvol.sh@29 -- # lvs=ca8b6cb3-f6ad-4efb-8f14-40e9aebbfa58 00:14:10.620 18:30:17 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ca8b6cb3-f6ad-4efb-8f14-40e9aebbfa58 lvol 20 00:14:10.878 18:30:18 -- target/nvmf_lvol.sh@32 -- # lvol=89fde571-f9ea-4300-a590-d2855d2e13ab 00:14:10.878 18:30:18 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:11.137 18:30:18 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 89fde571-f9ea-4300-a590-d2855d2e13ab 00:14:11.394 18:30:18 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.394 [2024-07-14 18:30:18.773800] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.394 18:30:18 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.960 18:30:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=82999 00:14:11.960 18:30:19 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:11.960 18:30:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:12.944 18:30:20 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 89fde571-f9ea-4300-a590-d2855d2e13ab MY_SNAPSHOT 00:14:12.944 18:30:20 -- target/nvmf_lvol.sh@47 -- # snapshot=1143021c-f996-4cef-9a5f-b5adc4dbea34 00:14:12.944 18:30:20 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 89fde571-f9ea-4300-a590-d2855d2e13ab 30 00:14:13.509 18:30:20 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1143021c-f996-4cef-9a5f-b5adc4dbea34 MY_CLONE 00:14:13.766 18:30:20 -- target/nvmf_lvol.sh@49 -- # clone=e862baab-aa3e-4ce1-b8fd-5600bb090910 00:14:13.766 18:30:20 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e862baab-aa3e-4ce1-b8fd-5600bb090910 00:14:14.329 18:30:21 -- target/nvmf_lvol.sh@53 -- # wait 82999 00:14:22.457 Initializing NVMe Controllers 00:14:22.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:22.457 Controller IO queue size 128, less than required. 00:14:22.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:22.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:22.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:22.457 Initialization complete. Launching workers. 
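For reference when reading the results that follow: the spdk_nvme_perf run above was started with -q 128 -w randwrite -t 10 -c 0x18, and core mask 0x18 is binary 11000, i.e. cores 3 and 4, which is why the table below reports one NSID 1 row "from core 3" and one "from core 4". A tiny bash sketch of that mask decoding (illustrative only, not part of the test scripts):
  # 0x18 -> bits 3 and 4 set -> the two worker cores reported below
  printf 'cores:'; for c in 0 1 2 3 4 5 6 7; do (( (0x18 >> c) & 1 )) && printf ' %d' "$c"; done; echo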
00:14:22.457 ======================================================== 00:14:22.457 Latency(us) 00:14:22.457 Device Information : IOPS MiB/s Average min max 00:14:22.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10492.17 40.99 12202.75 2121.21 81493.67 00:14:22.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10441.48 40.79 12266.23 3256.27 60814.57 00:14:22.457 ======================================================== 00:14:22.457 Total : 20933.65 81.77 12234.41 2121.21 81493.67 00:14:22.457 00:14:22.457 18:30:29 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.457 18:30:29 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 89fde571-f9ea-4300-a590-d2855d2e13ab 00:14:22.764 18:30:29 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ca8b6cb3-f6ad-4efb-8f14-40e9aebbfa58 00:14:22.764 18:30:30 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:22.764 18:30:30 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:22.764 18:30:30 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:22.764 18:30:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:22.764 18:30:30 -- nvmf/common.sh@116 -- # sync 00:14:22.764 18:30:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:22.764 18:30:30 -- nvmf/common.sh@119 -- # set +e 00:14:22.764 18:30:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:22.764 18:30:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:22.764 rmmod nvme_tcp 00:14:22.764 rmmod nvme_fabrics 00:14:23.023 rmmod nvme_keyring 00:14:23.023 18:30:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:23.023 18:30:30 -- nvmf/common.sh@123 -- # set -e 00:14:23.023 18:30:30 -- nvmf/common.sh@124 -- # return 0 00:14:23.023 18:30:30 -- nvmf/common.sh@477 -- # '[' -n 82857 ']' 00:14:23.023 18:30:30 -- nvmf/common.sh@478 -- # killprocess 82857 00:14:23.023 18:30:30 -- common/autotest_common.sh@926 -- # '[' -z 82857 ']' 00:14:23.023 18:30:30 -- common/autotest_common.sh@930 -- # kill -0 82857 00:14:23.023 18:30:30 -- common/autotest_common.sh@931 -- # uname 00:14:23.023 18:30:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:23.023 18:30:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82857 00:14:23.023 18:30:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:23.023 18:30:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:23.023 18:30:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82857' 00:14:23.023 killing process with pid 82857 00:14:23.023 18:30:30 -- common/autotest_common.sh@945 -- # kill 82857 00:14:23.023 18:30:30 -- common/autotest_common.sh@950 -- # wait 82857 00:14:23.282 18:30:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:23.282 18:30:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:23.282 18:30:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:23.283 18:30:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.283 18:30:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:23.283 18:30:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.283 18:30:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.283 18:30:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.283 18:30:30 -- nvmf/common.sh@278 -- # ip -4 addr flush 
nvmf_init_if 00:14:23.283 00:14:23.283 real 0m15.472s 00:14:23.283 user 1m4.913s 00:14:23.283 sys 0m3.780s 00:14:23.283 18:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.283 18:30:30 -- common/autotest_common.sh@10 -- # set +x 00:14:23.283 ************************************ 00:14:23.283 END TEST nvmf_lvol 00:14:23.283 ************************************ 00:14:23.283 18:30:30 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.283 18:30:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.283 18:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.283 18:30:30 -- common/autotest_common.sh@10 -- # set +x 00:14:23.283 ************************************ 00:14:23.283 START TEST nvmf_lvs_grow 00:14:23.283 ************************************ 00:14:23.283 18:30:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.283 * Looking for test storage... 00:14:23.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.283 18:30:30 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.283 18:30:30 -- nvmf/common.sh@7 -- # uname -s 00:14:23.283 18:30:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.283 18:30:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.283 18:30:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.283 18:30:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.283 18:30:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.283 18:30:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.283 18:30:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.283 18:30:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.283 18:30:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.283 18:30:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.283 18:30:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:14:23.283 18:30:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:14:23.283 18:30:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.283 18:30:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.283 18:30:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.283 18:30:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.283 18:30:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.283 18:30:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.283 18:30:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.283 18:30:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.283 18:30:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.283 18:30:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.283 18:30:30 -- paths/export.sh@5 -- # export PATH 00:14:23.283 18:30:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.283 18:30:30 -- nvmf/common.sh@46 -- # : 0 00:14:23.283 18:30:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.283 18:30:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.283 18:30:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.283 18:30:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.283 18:30:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.283 18:30:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.283 18:30:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.283 18:30:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.283 18:30:30 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.283 18:30:30 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.283 18:30:30 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:23.283 18:30:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.283 18:30:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.283 18:30:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.283 18:30:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.283 18:30:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.283 18:30:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.283 18:30:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.283 18:30:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.283 18:30:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:23.283 18:30:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:23.283 18:30:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:23.283 18:30:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:23.283 18:30:30 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:23.283 18:30:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:23.283 18:30:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.283 18:30:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.283 18:30:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:23.283 18:30:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:23.283 18:30:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.283 18:30:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.283 18:30:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.283 18:30:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.283 18:30:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.283 18:30:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.283 18:30:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.283 18:30:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.283 18:30:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:23.543 18:30:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:23.543 Cannot find device "nvmf_tgt_br" 00:14:23.543 18:30:30 -- nvmf/common.sh@154 -- # true 00:14:23.543 18:30:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.543 Cannot find device "nvmf_tgt_br2" 00:14:23.543 18:30:30 -- nvmf/common.sh@155 -- # true 00:14:23.543 18:30:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:23.543 18:30:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:23.543 Cannot find device "nvmf_tgt_br" 00:14:23.543 18:30:30 -- nvmf/common.sh@157 -- # true 00:14:23.543 18:30:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:23.543 Cannot find device "nvmf_tgt_br2" 00:14:23.543 18:30:30 -- nvmf/common.sh@158 -- # true 00:14:23.543 18:30:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:23.543 18:30:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:23.543 18:30:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.543 18:30:30 -- nvmf/common.sh@161 -- # true 00:14:23.543 18:30:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.543 18:30:30 -- nvmf/common.sh@162 -- # true 00:14:23.543 18:30:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.543 18:30:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.543 18:30:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.543 18:30:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.543 18:30:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.543 18:30:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.543 18:30:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.543 18:30:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.543 18:30:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.543 18:30:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:23.543 18:30:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:23.543 18:30:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:23.543 18:30:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:23.543 18:30:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.543 18:30:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.543 18:30:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.543 18:30:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:23.543 18:30:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:23.543 18:30:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.801 18:30:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.801 18:30:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.801 18:30:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.801 18:30:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.801 18:30:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:23.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:23.801 00:14:23.801 --- 10.0.0.2 ping statistics --- 00:14:23.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.801 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:23.801 18:30:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:23.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:23.801 00:14:23.801 --- 10.0.0.3 ping statistics --- 00:14:23.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.801 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:23.801 18:30:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:23.801 00:14:23.801 --- 10.0.0.1 ping statistics --- 00:14:23.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.801 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:23.801 18:30:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.801 18:30:31 -- nvmf/common.sh@421 -- # return 0 00:14:23.801 18:30:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:23.801 18:30:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.801 18:30:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:23.801 18:30:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:23.801 18:30:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.801 18:30:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:23.801 18:30:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:23.801 18:30:31 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:23.801 18:30:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.801 18:30:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:23.801 18:30:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.801 18:30:31 -- nvmf/common.sh@469 -- # nvmfpid=83361 00:14:23.801 18:30:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:23.801 18:30:31 -- nvmf/common.sh@470 -- # waitforlisten 83361 00:14:23.801 18:30:31 -- common/autotest_common.sh@819 -- # '[' -z 83361 ']' 00:14:23.801 18:30:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.801 18:30:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.801 18:30:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.801 18:30:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.801 18:30:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.801 [2024-07-14 18:30:31.105779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:23.801 [2024-07-14 18:30:31.105856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.060 [2024-07-14 18:30:31.244927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.060 [2024-07-14 18:30:31.313213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:24.060 [2024-07-14 18:30:31.313377] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.060 [2024-07-14 18:30:31.313393] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.060 [2024-07-14 18:30:31.313405] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:24.060 [2024-07-14 18:30:31.313434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.624 18:30:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:24.624 18:30:32 -- common/autotest_common.sh@852 -- # return 0 00:14:24.624 18:30:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:24.624 18:30:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:24.625 18:30:32 -- common/autotest_common.sh@10 -- # set +x 00:14:24.882 18:30:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.882 18:30:32 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:25.140 [2024-07-14 18:30:32.316984] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:25.140 18:30:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:25.140 18:30:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:25.140 18:30:32 -- common/autotest_common.sh@10 -- # set +x 00:14:25.140 ************************************ 00:14:25.140 START TEST lvs_grow_clean 00:14:25.140 ************************************ 00:14:25.140 18:30:32 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.140 18:30:32 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:25.397 18:30:32 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:25.397 18:30:32 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:25.656 18:30:32 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:25.656 18:30:32 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:25.656 18:30:32 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:25.914 18:30:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:25.914 18:30:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:25.914 18:30:33 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6994829d-6d42-4e8e-af75-b0e5ef522086 lvol 150 00:14:26.171 18:30:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=981871a3-0164-4760-aab4-3dbcdd796b8c 00:14:26.171 18:30:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:26.171 18:30:33 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:26.429 [2024-07-14 18:30:33.652214] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:26.429 [2024-07-14 18:30:33.652280] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:26.429 true 00:14:26.429 18:30:33 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:26.429 18:30:33 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:26.687 18:30:33 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:26.687 18:30:33 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:26.687 18:30:34 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 981871a3-0164-4760-aab4-3dbcdd796b8c 00:14:26.946 18:30:34 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:27.205 [2024-07-14 18:30:34.473672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.205 18:30:34 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.464 18:30:34 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:27.464 18:30:34 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83523 00:14:27.464 18:30:34 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:27.464 18:30:34 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83523 /var/tmp/bdevperf.sock 00:14:27.464 18:30:34 -- common/autotest_common.sh@819 -- # '[' -z 83523 ']' 00:14:27.464 18:30:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.464 18:30:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.464 18:30:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.464 18:30:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.464 18:30:34 -- common/autotest_common.sh@10 -- # set +x 00:14:27.464 [2024-07-14 18:30:34.798317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
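[editor's note] The rpc.py calls traced so far boil down to: a TCP transport, an AIO bdev over a 200 MiB file, a logical volume store on it, a 150 MiB lvol, and an NVMe/TCP subsystem exporting that lvol on 10.0.0.2:4420. A condensed sketch of the same sequence, using the flags, sizes and names reported by this run (the shell variables are ours):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

$rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport for the target

truncate -s 200M "$aio_file"                                        # backing file for the AIO bdev
$rpc bdev_aio_create "$aio_file" aio_bdev 4096                      # 4 KiB block size
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)                # 4 MiB clusters -> 49 data clusters here
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                    # nominally 150 MiB logical volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420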
00:14:27.464 [2024-07-14 18:30:34.798391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83523 ] 00:14:27.723 [2024-07-14 18:30:34.932559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.723 [2024-07-14 18:30:34.991858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.659 18:30:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.659 18:30:35 -- common/autotest_common.sh@852 -- # return 0 00:14:28.659 18:30:35 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:28.659 Nvme0n1 00:14:28.659 18:30:36 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:28.918 [ 00:14:28.918 { 00:14:28.918 "aliases": [ 00:14:28.918 "981871a3-0164-4760-aab4-3dbcdd796b8c" 00:14:28.918 ], 00:14:28.918 "assigned_rate_limits": { 00:14:28.918 "r_mbytes_per_sec": 0, 00:14:28.918 "rw_ios_per_sec": 0, 00:14:28.918 "rw_mbytes_per_sec": 0, 00:14:28.918 "w_mbytes_per_sec": 0 00:14:28.918 }, 00:14:28.918 "block_size": 4096, 00:14:28.918 "claimed": false, 00:14:28.918 "driver_specific": { 00:14:28.918 "mp_policy": "active_passive", 00:14:28.918 "nvme": [ 00:14:28.918 { 00:14:28.918 "ctrlr_data": { 00:14:28.918 "ana_reporting": false, 00:14:28.918 "cntlid": 1, 00:14:28.918 "firmware_revision": "24.01.1", 00:14:28.918 "model_number": "SPDK bdev Controller", 00:14:28.918 "multi_ctrlr": true, 00:14:28.918 "oacs": { 00:14:28.918 "firmware": 0, 00:14:28.918 "format": 0, 00:14:28.918 "ns_manage": 0, 00:14:28.918 "security": 0 00:14:28.918 }, 00:14:28.918 "serial_number": "SPDK0", 00:14:28.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:28.918 "vendor_id": "0x8086" 00:14:28.918 }, 00:14:28.918 "ns_data": { 00:14:28.918 "can_share": true, 00:14:28.918 "id": 1 00:14:28.918 }, 00:14:28.918 "trid": { 00:14:28.918 "adrfam": "IPv4", 00:14:28.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:28.918 "traddr": "10.0.0.2", 00:14:28.918 "trsvcid": "4420", 00:14:28.918 "trtype": "TCP" 00:14:28.918 }, 00:14:28.918 "vs": { 00:14:28.918 "nvme_version": "1.3" 00:14:28.918 } 00:14:28.918 } 00:14:28.918 ] 00:14:28.918 }, 00:14:28.918 "name": "Nvme0n1", 00:14:28.918 "num_blocks": 38912, 00:14:28.918 "product_name": "NVMe disk", 00:14:28.918 "supported_io_types": { 00:14:28.918 "abort": true, 00:14:28.918 "compare": true, 00:14:28.918 "compare_and_write": true, 00:14:28.918 "flush": true, 00:14:28.918 "nvme_admin": true, 00:14:28.919 "nvme_io": true, 00:14:28.919 "read": true, 00:14:28.919 "reset": true, 00:14:28.919 "unmap": true, 00:14:28.919 "write": true, 00:14:28.919 "write_zeroes": true 00:14:28.919 }, 00:14:28.919 "uuid": "981871a3-0164-4760-aab4-3dbcdd796b8c", 00:14:28.919 "zoned": false 00:14:28.919 } 00:14:28.919 ] 00:14:28.919 18:30:36 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.919 18:30:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83565 00:14:28.919 18:30:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:29.177 Running I/O for 10 seconds... 
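[editor's note] On the initiator side, bdevperf runs as a second SPDK app (core mask 0x2) with its own RPC socket; the test attaches an NVMe-oF controller over TCP and only then starts the timed run. Roughly, with the exact flags from the trace (4 KiB random writes, queue depth 128, 10 seconds, per the Job line below):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -z makes bdevperf wait for the perform_tests RPC instead of starting I/O immediately.
$bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!

# Attach the exported subsystem; the resulting bdev appears as Nvme0n1 in bdev_get_bdevs.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Start the timed run; this produces the per-second latency tables that follow.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests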
00:14:30.113 Latency(us) 00:14:30.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.113 Nvme0n1 : 1.00 7227.00 28.23 0.00 0.00 0.00 0.00 0.00 00:14:30.113 =================================================================================================================== 00:14:30.113 Total : 7227.00 28.23 0.00 0.00 0.00 0.00 0.00 00:14:30.113 00:14:31.048 18:30:38 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:31.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.048 Nvme0n1 : 2.00 7231.50 28.25 0.00 0.00 0.00 0.00 0.00 00:14:31.048 =================================================================================================================== 00:14:31.048 Total : 7231.50 28.25 0.00 0.00 0.00 0.00 0.00 00:14:31.048 00:14:31.307 true 00:14:31.307 18:30:38 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:31.307 18:30:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:31.565 18:30:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:31.566 18:30:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:31.566 18:30:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 83565 00:14:32.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.133 Nvme0n1 : 3.00 7231.67 28.25 0.00 0.00 0.00 0.00 0.00 00:14:32.133 =================================================================================================================== 00:14:32.133 Total : 7231.67 28.25 0.00 0.00 0.00 0.00 0.00 00:14:32.133 00:14:33.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.070 Nvme0n1 : 4.00 7263.50 28.37 0.00 0.00 0.00 0.00 0.00 00:14:33.070 =================================================================================================================== 00:14:33.070 Total : 7263.50 28.37 0.00 0.00 0.00 0.00 0.00 00:14:33.070 00:14:34.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.003 Nvme0n1 : 5.00 7249.00 28.32 0.00 0.00 0.00 0.00 0.00 00:14:34.003 =================================================================================================================== 00:14:34.003 Total : 7249.00 28.32 0.00 0.00 0.00 0.00 0.00 00:14:34.003 00:14:35.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.377 Nvme0n1 : 6.00 7226.00 28.23 0.00 0.00 0.00 0.00 0.00 00:14:35.377 =================================================================================================================== 00:14:35.377 Total : 7226.00 28.23 0.00 0.00 0.00 0.00 0.00 00:14:35.377 00:14:36.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.312 Nvme0n1 : 7.00 7205.43 28.15 0.00 0.00 0.00 0.00 0.00 00:14:36.312 =================================================================================================================== 00:14:36.312 Total : 7205.43 28.15 0.00 0.00 0.00 0.00 0.00 00:14:36.312 00:14:37.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.246 Nvme0n1 : 8.00 7183.25 28.06 0.00 0.00 0.00 0.00 0.00 00:14:37.246 
=================================================================================================================== 00:14:37.246 Total : 7183.25 28.06 0.00 0.00 0.00 0.00 0.00 00:14:37.246 00:14:38.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.179 Nvme0n1 : 9.00 7174.11 28.02 0.00 0.00 0.00 0.00 0.00 00:14:38.179 =================================================================================================================== 00:14:38.179 Total : 7174.11 28.02 0.00 0.00 0.00 0.00 0.00 00:14:38.179 00:14:39.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.112 Nvme0n1 : 10.00 7160.10 27.97 0.00 0.00 0.00 0.00 0.00 00:14:39.112 =================================================================================================================== 00:14:39.112 Total : 7160.10 27.97 0.00 0.00 0.00 0.00 0.00 00:14:39.112 00:14:39.112 00:14:39.112 Latency(us) 00:14:39.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.112 Nvme0n1 : 10.01 7164.38 27.99 0.00 0.00 17854.15 7983.48 36700.16 00:14:39.112 =================================================================================================================== 00:14:39.112 Total : 7164.38 27.99 0.00 0.00 17854.15 7983.48 36700.16 00:14:39.112 0 00:14:39.112 18:30:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83523 00:14:39.112 18:30:46 -- common/autotest_common.sh@926 -- # '[' -z 83523 ']' 00:14:39.112 18:30:46 -- common/autotest_common.sh@930 -- # kill -0 83523 00:14:39.112 18:30:46 -- common/autotest_common.sh@931 -- # uname 00:14:39.112 18:30:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:39.112 18:30:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83523 00:14:39.112 18:30:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:39.112 18:30:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:39.112 killing process with pid 83523 00:14:39.112 18:30:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83523' 00:14:39.112 18:30:46 -- common/autotest_common.sh@945 -- # kill 83523 00:14:39.112 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.112 00:14:39.112 Latency(us) 00:14:39.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.112 =================================================================================================================== 00:14:39.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.112 18:30:46 -- common/autotest_common.sh@950 -- # wait 83523 00:14:39.370 18:30:46 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:39.628 18:30:46 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:39.628 18:30:46 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:39.886 18:30:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:39.886 18:30:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:39.886 18:30:47 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:40.144 [2024-07-14 18:30:47.345193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:40.144 
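[editor's note] The cluster counts reported through this test follow directly from the sizes involved: with 4 MiB clusters, the 200 MiB file yields 49 data clusters (one cluster's worth is taken by lvstore metadata in this run), the grow to 400 MiB yields 99, and the nominally 150 MiB lvol is rounded up to whole clusters (38, i.e. the 38912 blocks of 4096 B reported by bdev_get_bdevs), leaving 61 clusters free. As a quick check:

echo $(( 200 / 4 - 1 ))                   # 49 data clusters before the grow
echo $(( 400 / 4 - 1 ))                   # 99 data clusters after the grow
echo $(( (150 + 3) / 4 ))                 # 38 clusters for the 150 MiB lvol, rounded up
echo $(( 38 * 4 * 1024 * 1024 / 4096 ))   # 38912 blocks of 4096 B, matching num_blocks
echo $(( 99 - 38 ))                       # 61 free clusters, matching free_clusters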
18:30:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:40.144 18:30:47 -- common/autotest_common.sh@640 -- # local es=0 00:14:40.144 18:30:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:40.144 18:30:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.144 18:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:40.144 18:30:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.144 18:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:40.144 18:30:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.144 18:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:40.144 18:30:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.144 18:30:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:40.144 18:30:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:40.402 2024/07/14 18:30:47 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6994829d-6d42-4e8e-af75-b0e5ef522086], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:40.402 request: 00:14:40.402 { 00:14:40.402 "method": "bdev_lvol_get_lvstores", 00:14:40.402 "params": { 00:14:40.402 "uuid": "6994829d-6d42-4e8e-af75-b0e5ef522086" 00:14:40.402 } 00:14:40.402 } 00:14:40.402 Got JSON-RPC error response 00:14:40.402 GoRPCClient: error on JSON-RPC call 00:14:40.402 18:30:47 -- common/autotest_common.sh@643 -- # es=1 00:14:40.402 18:30:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:40.402 18:30:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:40.402 18:30:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:40.402 18:30:47 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:40.402 aio_bdev 00:14:40.661 18:30:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 981871a3-0164-4760-aab4-3dbcdd796b8c 00:14:40.661 18:30:47 -- common/autotest_common.sh@887 -- # local bdev_name=981871a3-0164-4760-aab4-3dbcdd796b8c 00:14:40.661 18:30:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:40.661 18:30:47 -- common/autotest_common.sh@889 -- # local i 00:14:40.661 18:30:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:40.661 18:30:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:40.661 18:30:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:40.661 18:30:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 981871a3-0164-4760-aab4-3dbcdd796b8c -t 2000 00:14:40.920 [ 00:14:40.920 { 00:14:40.920 "aliases": [ 00:14:40.920 "lvs/lvol" 00:14:40.920 ], 00:14:40.920 "assigned_rate_limits": { 00:14:40.920 "r_mbytes_per_sec": 0, 00:14:40.920 "rw_ios_per_sec": 0, 00:14:40.920 "rw_mbytes_per_sec": 0, 00:14:40.920 "w_mbytes_per_sec": 0 00:14:40.920 }, 00:14:40.920 "block_size": 4096, 
00:14:40.920 "claimed": false, 00:14:40.920 "driver_specific": { 00:14:40.920 "lvol": { 00:14:40.920 "base_bdev": "aio_bdev", 00:14:40.920 "clone": false, 00:14:40.920 "esnap_clone": false, 00:14:40.920 "lvol_store_uuid": "6994829d-6d42-4e8e-af75-b0e5ef522086", 00:14:40.920 "snapshot": false, 00:14:40.920 "thin_provision": false 00:14:40.920 } 00:14:40.920 }, 00:14:40.920 "name": "981871a3-0164-4760-aab4-3dbcdd796b8c", 00:14:40.920 "num_blocks": 38912, 00:14:40.920 "product_name": "Logical Volume", 00:14:40.920 "supported_io_types": { 00:14:40.920 "abort": false, 00:14:40.920 "compare": false, 00:14:40.920 "compare_and_write": false, 00:14:40.920 "flush": false, 00:14:40.920 "nvme_admin": false, 00:14:40.920 "nvme_io": false, 00:14:40.920 "read": true, 00:14:40.920 "reset": true, 00:14:40.920 "unmap": true, 00:14:40.920 "write": true, 00:14:40.920 "write_zeroes": true 00:14:40.920 }, 00:14:40.920 "uuid": "981871a3-0164-4760-aab4-3dbcdd796b8c", 00:14:40.920 "zoned": false 00:14:40.920 } 00:14:40.920 ] 00:14:40.920 18:30:48 -- common/autotest_common.sh@895 -- # return 0 00:14:40.920 18:30:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:40.920 18:30:48 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:41.178 18:30:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:41.178 18:30:48 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:41.178 18:30:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:41.437 18:30:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:41.437 18:30:48 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 981871a3-0164-4760-aab4-3dbcdd796b8c 00:14:41.437 18:30:48 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6994829d-6d42-4e8e-af75-b0e5ef522086 00:14:41.695 18:30:49 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:41.953 18:30:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.530 00:14:42.530 real 0m17.288s 00:14:42.530 user 0m16.716s 00:14:42.530 sys 0m2.001s 00:14:42.530 18:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.530 18:30:49 -- common/autotest_common.sh@10 -- # set +x 00:14:42.530 ************************************ 00:14:42.530 END TEST lvs_grow_clean 00:14:42.530 ************************************ 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:42.530 18:30:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:42.530 18:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.530 18:30:49 -- common/autotest_common.sh@10 -- # set +x 00:14:42.530 ************************************ 00:14:42.530 START TEST lvs_grow_dirty 00:14:42.530 ************************************ 00:14:42.530 18:30:49 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.530 18:30:49 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:42.805 18:30:50 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:42.805 18:30:50 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:42.805 18:30:50 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:42.805 18:30:50 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:42.805 18:30:50 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:43.069 18:30:50 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:43.069 18:30:50 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:43.069 18:30:50 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 lvol 150 00:14:43.327 18:30:50 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:14:43.327 18:30:50 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:43.327 18:30:50 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:43.585 [2024-07-14 18:30:50.843315] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:43.585 [2024-07-14 18:30:50.843395] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:43.585 true 00:14:43.585 18:30:50 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:43.585 18:30:50 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:43.844 18:30:51 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:43.844 18:30:51 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:44.101 18:30:51 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:14:44.359 18:30:51 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:44.618 18:30:51 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:44.877 18:30:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83941 00:14:44.877 18:30:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.877 
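[editor's note] The grow operation itself, just replayed here for the dirty variant, is a three-step sequence: enlarge the backing file, let the AIO bdev pick up the new size, then ask the lvstore to claim the new clusters (the bdev_lvol_grow_lvstore call is issued a couple of seconds into the bdevperf run). A sketch using this run's lvstore UUID; the jq check mirrors the script's assertion:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # 200M -> 400M backing file
$rpc bdev_aio_rescan aio_bdev                                             # block count 51200 -> 102400
$rpc bdev_lvol_grow_lvstore -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7       # lvstore absorbs the new space

# total_data_clusters should now read 99 instead of 49.
$rpc bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 | jq -r '.[0].total_data_clusters'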
18:30:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83941 /var/tmp/bdevperf.sock 00:14:44.877 18:30:52 -- common/autotest_common.sh@819 -- # '[' -z 83941 ']' 00:14:44.877 18:30:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.877 18:30:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.877 18:30:52 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:44.877 18:30:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.877 18:30:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.877 18:30:52 -- common/autotest_common.sh@10 -- # set +x 00:14:44.877 [2024-07-14 18:30:52.085327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.877 [2024-07-14 18:30:52.085405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83941 ] 00:14:44.877 [2024-07-14 18:30:52.218481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.136 [2024-07-14 18:30:52.298953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.702 18:30:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:45.702 18:30:53 -- common/autotest_common.sh@852 -- # return 0 00:14:45.702 18:30:53 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:45.960 Nvme0n1 00:14:45.960 18:30:53 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:46.217 [ 00:14:46.217 { 00:14:46.217 "aliases": [ 00:14:46.217 "c3717eb0-bb2b-49cc-b85a-549a5daa56fb" 00:14:46.217 ], 00:14:46.217 "assigned_rate_limits": { 00:14:46.217 "r_mbytes_per_sec": 0, 00:14:46.217 "rw_ios_per_sec": 0, 00:14:46.217 "rw_mbytes_per_sec": 0, 00:14:46.217 "w_mbytes_per_sec": 0 00:14:46.217 }, 00:14:46.217 "block_size": 4096, 00:14:46.217 "claimed": false, 00:14:46.217 "driver_specific": { 00:14:46.217 "mp_policy": "active_passive", 00:14:46.217 "nvme": [ 00:14:46.217 { 00:14:46.217 "ctrlr_data": { 00:14:46.217 "ana_reporting": false, 00:14:46.217 "cntlid": 1, 00:14:46.217 "firmware_revision": "24.01.1", 00:14:46.217 "model_number": "SPDK bdev Controller", 00:14:46.217 "multi_ctrlr": true, 00:14:46.217 "oacs": { 00:14:46.217 "firmware": 0, 00:14:46.217 "format": 0, 00:14:46.217 "ns_manage": 0, 00:14:46.217 "security": 0 00:14:46.217 }, 00:14:46.217 "serial_number": "SPDK0", 00:14:46.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.217 "vendor_id": "0x8086" 00:14:46.217 }, 00:14:46.217 "ns_data": { 00:14:46.217 "can_share": true, 00:14:46.217 "id": 1 00:14:46.217 }, 00:14:46.217 "trid": { 00:14:46.217 "adrfam": "IPv4", 00:14:46.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.217 "traddr": "10.0.0.2", 00:14:46.217 "trsvcid": "4420", 00:14:46.217 "trtype": "TCP" 00:14:46.217 }, 00:14:46.217 "vs": { 00:14:46.217 "nvme_version": "1.3" 00:14:46.217 } 00:14:46.217 } 00:14:46.217 ] 00:14:46.217 }, 
00:14:46.217 "name": "Nvme0n1", 00:14:46.217 "num_blocks": 38912, 00:14:46.217 "product_name": "NVMe disk", 00:14:46.217 "supported_io_types": { 00:14:46.217 "abort": true, 00:14:46.217 "compare": true, 00:14:46.217 "compare_and_write": true, 00:14:46.218 "flush": true, 00:14:46.218 "nvme_admin": true, 00:14:46.218 "nvme_io": true, 00:14:46.218 "read": true, 00:14:46.218 "reset": true, 00:14:46.218 "unmap": true, 00:14:46.218 "write": true, 00:14:46.218 "write_zeroes": true 00:14:46.218 }, 00:14:46.218 "uuid": "c3717eb0-bb2b-49cc-b85a-549a5daa56fb", 00:14:46.218 "zoned": false 00:14:46.218 } 00:14:46.218 ] 00:14:46.218 18:30:53 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83994 00:14:46.218 18:30:53 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.218 18:30:53 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:46.218 Running I/O for 10 seconds... 00:14:47.588 Latency(us) 00:14:47.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.588 Nvme0n1 : 1.00 7278.00 28.43 0.00 0.00 0.00 0.00 0.00 00:14:47.588 =================================================================================================================== 00:14:47.588 Total : 7278.00 28.43 0.00 0.00 0.00 0.00 0.00 00:14:47.588 00:14:48.153 18:30:55 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:48.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.411 Nvme0n1 : 2.00 7212.50 28.17 0.00 0.00 0.00 0.00 0.00 00:14:48.411 =================================================================================================================== 00:14:48.411 Total : 7212.50 28.17 0.00 0.00 0.00 0.00 0.00 00:14:48.411 00:14:48.411 true 00:14:48.668 18:30:55 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:48.668 18:30:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:48.926 18:30:56 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:48.926 18:30:56 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:48.926 18:30:56 -- target/nvmf_lvs_grow.sh@65 -- # wait 83994 00:14:49.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.492 Nvme0n1 : 3.00 7226.00 28.23 0.00 0.00 0.00 0.00 0.00 00:14:49.492 =================================================================================================================== 00:14:49.492 Total : 7226.00 28.23 0.00 0.00 0.00 0.00 0.00 00:14:49.492 00:14:50.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.421 Nvme0n1 : 4.00 7210.75 28.17 0.00 0.00 0.00 0.00 0.00 00:14:50.421 =================================================================================================================== 00:14:50.421 Total : 7210.75 28.17 0.00 0.00 0.00 0.00 0.00 00:14:50.421 00:14:51.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.353 Nvme0n1 : 5.00 7198.20 28.12 0.00 0.00 0.00 0.00 0.00 00:14:51.353 =================================================================================================================== 00:14:51.353 Total : 7198.20 28.12 0.00 0.00 0.00 0.00 0.00 00:14:51.353 00:14:52.288 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.288 Nvme0n1 : 6.00 7202.00 28.13 0.00 0.00 0.00 0.00 0.00 00:14:52.288 =================================================================================================================== 00:14:52.288 Total : 7202.00 28.13 0.00 0.00 0.00 0.00 0.00 00:14:52.288 00:14:53.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.244 Nvme0n1 : 7.00 7195.00 28.11 0.00 0.00 0.00 0.00 0.00 00:14:53.244 =================================================================================================================== 00:14:53.244 Total : 7195.00 28.11 0.00 0.00 0.00 0.00 0.00 00:14:53.244 00:14:54.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.687 Nvme0n1 : 8.00 7059.88 27.58 0.00 0.00 0.00 0.00 0.00 00:14:54.687 =================================================================================================================== 00:14:54.687 Total : 7059.88 27.58 0.00 0.00 0.00 0.00 0.00 00:14:54.687 00:14:55.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.253 Nvme0n1 : 9.00 7052.33 27.55 0.00 0.00 0.00 0.00 0.00 00:14:55.253 =================================================================================================================== 00:14:55.253 Total : 7052.33 27.55 0.00 0.00 0.00 0.00 0.00 00:14:55.253 00:14:56.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.625 Nvme0n1 : 10.00 7051.10 27.54 0.00 0.00 0.00 0.00 0.00 00:14:56.625 =================================================================================================================== 00:14:56.625 Total : 7051.10 27.54 0.00 0.00 0.00 0.00 0.00 00:14:56.625 00:14:56.625 00:14:56.625 Latency(us) 00:14:56.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.625 Nvme0n1 : 10.01 7055.32 27.56 0.00 0.00 18129.24 7983.48 168725.41 00:14:56.625 =================================================================================================================== 00:14:56.625 Total : 7055.32 27.56 0.00 0.00 18129.24 7983.48 168725.41 00:14:56.625 0 00:14:56.625 18:31:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83941 00:14:56.625 18:31:03 -- common/autotest_common.sh@926 -- # '[' -z 83941 ']' 00:14:56.625 18:31:03 -- common/autotest_common.sh@930 -- # kill -0 83941 00:14:56.625 18:31:03 -- common/autotest_common.sh@931 -- # uname 00:14:56.625 18:31:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.625 18:31:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83941 00:14:56.625 18:31:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:56.625 18:31:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:56.625 killing process with pid 83941 00:14:56.625 18:31:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83941' 00:14:56.626 18:31:03 -- common/autotest_common.sh@945 -- # kill 83941 00:14:56.626 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.626 00:14:56.626 Latency(us) 00:14:56.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.626 =================================================================================================================== 00:14:56.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.626 18:31:03 -- 
common/autotest_common.sh@950 -- # wait 83941 00:14:56.626 18:31:03 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.883 18:31:04 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:56.883 18:31:04 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:57.140 18:31:04 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:57.140 18:31:04 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:57.140 18:31:04 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83361 00:14:57.140 18:31:04 -- target/nvmf_lvs_grow.sh@74 -- # wait 83361 00:14:57.140 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83361 Killed "${NVMF_APP[@]}" "$@" 00:14:57.140 18:31:04 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:57.140 18:31:04 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:57.140 18:31:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.140 18:31:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:57.140 18:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:57.140 18:31:04 -- nvmf/common.sh@469 -- # nvmfpid=84145 00:14:57.140 18:31:04 -- nvmf/common.sh@470 -- # waitforlisten 84145 00:14:57.140 18:31:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:57.140 18:31:04 -- common/autotest_common.sh@819 -- # '[' -z 84145 ']' 00:14:57.140 18:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.140 18:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.140 18:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.140 18:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.140 18:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:57.140 [2024-07-14 18:31:04.396366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:57.140 [2024-07-14 18:31:04.396458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.140 [2024-07-14 18:31:04.539069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.398 [2024-07-14 18:31:04.593596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.398 [2024-07-14 18:31:04.593756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.398 [2024-07-14 18:31:04.593767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.398 [2024-07-14 18:31:04.593775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
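[editor's note] This is where the dirty variant diverges: the first target is killed with SIGKILL while the lvstore still has unflushed metadata, a fresh nvmf_tgt is started, and when the test re-creates the AIO bdev over the same file the blobstore detects the unclean shutdown and recovers (the "Performing recovery on blobstore" notices just below). A sketch of that restart path, assuming this run's names and paths; the background/wait plumbing is simplified:

kill -9 "$nvmfpid"       # unclean shutdown of the first target, leaving the lvstore dirty

# Start a fresh target in the same namespace (as nvmfappstart does) and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

# Re-create the AIO bdev over the same file; loading the lvstore triggers blobstore recovery,
# after which the lvol bdev reappears and can be inspected.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b c3717eb0-bb2b-49cc-b85a-549a5daa56fb -t 2000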
00:14:57.398 [2024-07-14 18:31:04.593799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.964 18:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.964 18:31:05 -- common/autotest_common.sh@852 -- # return 0 00:14:57.964 18:31:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:57.964 18:31:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:57.964 18:31:05 -- common/autotest_common.sh@10 -- # set +x 00:14:57.964 18:31:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.964 18:31:05 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.222 [2024-07-14 18:31:05.531104] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:58.222 [2024-07-14 18:31:05.531397] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:58.222 [2024-07-14 18:31:05.531663] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:58.222 18:31:05 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:58.222 18:31:05 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:14:58.222 18:31:05 -- common/autotest_common.sh@887 -- # local bdev_name=c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:14:58.222 18:31:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:58.222 18:31:05 -- common/autotest_common.sh@889 -- # local i 00:14:58.222 18:31:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:58.222 18:31:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:58.222 18:31:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:58.481 18:31:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3717eb0-bb2b-49cc-b85a-549a5daa56fb -t 2000 00:14:58.739 [ 00:14:58.739 { 00:14:58.739 "aliases": [ 00:14:58.739 "lvs/lvol" 00:14:58.739 ], 00:14:58.739 "assigned_rate_limits": { 00:14:58.739 "r_mbytes_per_sec": 0, 00:14:58.739 "rw_ios_per_sec": 0, 00:14:58.739 "rw_mbytes_per_sec": 0, 00:14:58.739 "w_mbytes_per_sec": 0 00:14:58.739 }, 00:14:58.739 "block_size": 4096, 00:14:58.739 "claimed": false, 00:14:58.739 "driver_specific": { 00:14:58.739 "lvol": { 00:14:58.739 "base_bdev": "aio_bdev", 00:14:58.739 "clone": false, 00:14:58.739 "esnap_clone": false, 00:14:58.739 "lvol_store_uuid": "1e399e7e-2e58-41d0-bfcd-c3f38a3903a7", 00:14:58.739 "snapshot": false, 00:14:58.739 "thin_provision": false 00:14:58.739 } 00:14:58.739 }, 00:14:58.739 "name": "c3717eb0-bb2b-49cc-b85a-549a5daa56fb", 00:14:58.739 "num_blocks": 38912, 00:14:58.739 "product_name": "Logical Volume", 00:14:58.739 "supported_io_types": { 00:14:58.739 "abort": false, 00:14:58.739 "compare": false, 00:14:58.739 "compare_and_write": false, 00:14:58.739 "flush": false, 00:14:58.739 "nvme_admin": false, 00:14:58.739 "nvme_io": false, 00:14:58.739 "read": true, 00:14:58.739 "reset": true, 00:14:58.739 "unmap": true, 00:14:58.739 "write": true, 00:14:58.739 "write_zeroes": true 00:14:58.739 }, 00:14:58.739 "uuid": "c3717eb0-bb2b-49cc-b85a-549a5daa56fb", 00:14:58.739 "zoned": false 00:14:58.739 } 00:14:58.739 ] 00:14:58.739 18:31:06 -- common/autotest_common.sh@895 -- # return 0 00:14:58.739 18:31:06 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:58.739 18:31:06 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:58.997 18:31:06 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:58.997 18:31:06 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:58.997 18:31:06 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:59.256 18:31:06 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:59.256 18:31:06 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:59.515 [2024-07-14 18:31:06.728789] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:59.515 18:31:06 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:59.515 18:31:06 -- common/autotest_common.sh@640 -- # local es=0 00:14:59.515 18:31:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:59.515 18:31:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.515 18:31:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:59.515 18:31:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.515 18:31:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:59.515 18:31:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.515 18:31:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:59.515 18:31:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.515 18:31:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:59.515 18:31:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:14:59.773 request: 00:14:59.773 { 00:14:59.773 "method": "bdev_lvol_get_lvstores", 00:14:59.773 "params": { 00:14:59.773 "uuid": "1e399e7e-2e58-41d0-bfcd-c3f38a3903a7" 00:14:59.773 } 00:14:59.773 } 00:14:59.773 Got JSON-RPC error response 00:14:59.773 GoRPCClient: error on JSON-RPC call 00:14:59.774 2024/07/14 18:31:06 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:1e399e7e-2e58-41d0-bfcd-c3f38a3903a7], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:59.774 18:31:06 -- common/autotest_common.sh@643 -- # es=1 00:14:59.774 18:31:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:59.774 18:31:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:59.774 18:31:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:59.774 18:31:06 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.774 aio_bdev 00:14:59.774 18:31:07 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:14:59.774 18:31:07 -- common/autotest_common.sh@887 -- # local bdev_name=c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:14:59.774 18:31:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:59.774 
18:31:07 -- common/autotest_common.sh@889 -- # local i 00:14:59.774 18:31:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:59.774 18:31:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:59.774 18:31:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:00.031 18:31:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3717eb0-bb2b-49cc-b85a-549a5daa56fb -t 2000 00:15:00.289 [ 00:15:00.289 { 00:15:00.289 "aliases": [ 00:15:00.289 "lvs/lvol" 00:15:00.289 ], 00:15:00.289 "assigned_rate_limits": { 00:15:00.289 "r_mbytes_per_sec": 0, 00:15:00.289 "rw_ios_per_sec": 0, 00:15:00.289 "rw_mbytes_per_sec": 0, 00:15:00.289 "w_mbytes_per_sec": 0 00:15:00.289 }, 00:15:00.289 "block_size": 4096, 00:15:00.289 "claimed": false, 00:15:00.289 "driver_specific": { 00:15:00.289 "lvol": { 00:15:00.289 "base_bdev": "aio_bdev", 00:15:00.289 "clone": false, 00:15:00.289 "esnap_clone": false, 00:15:00.289 "lvol_store_uuid": "1e399e7e-2e58-41d0-bfcd-c3f38a3903a7", 00:15:00.289 "snapshot": false, 00:15:00.289 "thin_provision": false 00:15:00.289 } 00:15:00.289 }, 00:15:00.289 "name": "c3717eb0-bb2b-49cc-b85a-549a5daa56fb", 00:15:00.289 "num_blocks": 38912, 00:15:00.289 "product_name": "Logical Volume", 00:15:00.289 "supported_io_types": { 00:15:00.289 "abort": false, 00:15:00.289 "compare": false, 00:15:00.289 "compare_and_write": false, 00:15:00.289 "flush": false, 00:15:00.289 "nvme_admin": false, 00:15:00.289 "nvme_io": false, 00:15:00.289 "read": true, 00:15:00.289 "reset": true, 00:15:00.289 "unmap": true, 00:15:00.289 "write": true, 00:15:00.289 "write_zeroes": true 00:15:00.289 }, 00:15:00.289 "uuid": "c3717eb0-bb2b-49cc-b85a-549a5daa56fb", 00:15:00.289 "zoned": false 00:15:00.289 } 00:15:00.289 ] 00:15:00.289 18:31:07 -- common/autotest_common.sh@895 -- # return 0 00:15:00.289 18:31:07 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:15:00.289 18:31:07 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:00.547 18:31:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:00.547 18:31:07 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:15:00.547 18:31:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:00.805 18:31:08 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:00.805 18:31:08 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c3717eb0-bb2b-49cc-b85a-549a5daa56fb 00:15:01.084 18:31:08 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e399e7e-2e58-41d0-bfcd-c3f38a3903a7 00:15:01.084 18:31:08 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.342 18:31:08 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:01.906 00:15:01.906 real 0m19.343s 00:15:01.906 user 0m37.811s 00:15:01.906 sys 0m9.980s 00:15:01.906 18:31:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.906 18:31:09 -- common/autotest_common.sh@10 -- # set +x 00:15:01.906 ************************************ 00:15:01.906 END TEST lvs_grow_dirty 00:15:01.906 ************************************ 00:15:01.906 18:31:09 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:01.906 18:31:09 -- common/autotest_common.sh@796 -- # type=--id 00:15:01.906 18:31:09 -- common/autotest_common.sh@797 -- # id=0 00:15:01.906 18:31:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:15:01.906 18:31:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:01.906 18:31:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:15:01.906 18:31:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:15:01.906 18:31:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:15:01.906 18:31:09 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:01.906 nvmf_trace.0 00:15:01.906 18:31:09 -- common/autotest_common.sh@811 -- # return 0 00:15:01.906 18:31:09 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:01.906 18:31:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.906 18:31:09 -- nvmf/common.sh@116 -- # sync 00:15:01.906 18:31:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.906 18:31:09 -- nvmf/common.sh@119 -- # set +e 00:15:01.906 18:31:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.906 18:31:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.906 rmmod nvme_tcp 00:15:02.164 rmmod nvme_fabrics 00:15:02.164 rmmod nvme_keyring 00:15:02.164 18:31:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.164 18:31:09 -- nvmf/common.sh@123 -- # set -e 00:15:02.164 18:31:09 -- nvmf/common.sh@124 -- # return 0 00:15:02.164 18:31:09 -- nvmf/common.sh@477 -- # '[' -n 84145 ']' 00:15:02.164 18:31:09 -- nvmf/common.sh@478 -- # killprocess 84145 00:15:02.164 18:31:09 -- common/autotest_common.sh@926 -- # '[' -z 84145 ']' 00:15:02.164 18:31:09 -- common/autotest_common.sh@930 -- # kill -0 84145 00:15:02.164 18:31:09 -- common/autotest_common.sh@931 -- # uname 00:15:02.164 18:31:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.164 18:31:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84145 00:15:02.164 18:31:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.164 18:31:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.164 killing process with pid 84145 00:15:02.164 18:31:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84145' 00:15:02.164 18:31:09 -- common/autotest_common.sh@945 -- # kill 84145 00:15:02.164 18:31:09 -- common/autotest_common.sh@950 -- # wait 84145 00:15:02.164 18:31:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.164 18:31:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.164 18:31:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.164 18:31:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.164 18:31:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.164 18:31:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.164 18:31:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.164 18:31:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.423 18:31:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:02.423 00:15:02.423 real 0m39.026s 00:15:02.423 user 1m0.230s 00:15:02.423 sys 0m12.759s 00:15:02.423 18:31:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.423 18:31:09 -- common/autotest_common.sh@10 -- # set +x 00:15:02.423 
************************************ 00:15:02.423 END TEST nvmf_lvs_grow 00:15:02.423 ************************************ 00:15:02.423 18:31:09 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:02.423 18:31:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:02.423 18:31:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:02.423 18:31:09 -- common/autotest_common.sh@10 -- # set +x 00:15:02.423 ************************************ 00:15:02.423 START TEST nvmf_bdev_io_wait 00:15:02.423 ************************************ 00:15:02.423 18:31:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:02.423 * Looking for test storage... 00:15:02.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:02.423 18:31:09 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.423 18:31:09 -- nvmf/common.sh@7 -- # uname -s 00:15:02.423 18:31:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.423 18:31:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.423 18:31:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.423 18:31:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.423 18:31:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.423 18:31:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.423 18:31:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.423 18:31:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.423 18:31:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.423 18:31:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.423 18:31:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:15:02.423 18:31:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:15:02.423 18:31:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.423 18:31:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.423 18:31:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.423 18:31:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.423 18:31:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.423 18:31:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.423 18:31:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.423 18:31:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.423 18:31:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.423 18:31:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.423 18:31:09 -- paths/export.sh@5 -- # export PATH 00:15:02.423 18:31:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.423 18:31:09 -- nvmf/common.sh@46 -- # : 0 00:15:02.423 18:31:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.423 18:31:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.423 18:31:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.423 18:31:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.423 18:31:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.423 18:31:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.423 18:31:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.423 18:31:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.423 18:31:09 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.423 18:31:09 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.423 18:31:09 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:02.423 18:31:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:02.423 18:31:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.423 18:31:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.423 18:31:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.423 18:31:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.423 18:31:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.423 18:31:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.423 18:31:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.423 18:31:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:02.423 18:31:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:02.423 18:31:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:02.423 18:31:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:02.423 18:31:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:15:02.423 18:31:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:02.423 18:31:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.423 18:31:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.423 18:31:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.423 18:31:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:02.423 18:31:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.423 18:31:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.423 18:31:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.423 18:31:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.423 18:31:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.423 18:31:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.423 18:31:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.423 18:31:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.423 18:31:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:02.423 18:31:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:02.423 Cannot find device "nvmf_tgt_br" 00:15:02.423 18:31:09 -- nvmf/common.sh@154 -- # true 00:15:02.423 18:31:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.423 Cannot find device "nvmf_tgt_br2" 00:15:02.423 18:31:09 -- nvmf/common.sh@155 -- # true 00:15:02.423 18:31:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:02.423 18:31:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:02.423 Cannot find device "nvmf_tgt_br" 00:15:02.423 18:31:09 -- nvmf/common.sh@157 -- # true 00:15:02.423 18:31:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:02.423 Cannot find device "nvmf_tgt_br2" 00:15:02.423 18:31:09 -- nvmf/common.sh@158 -- # true 00:15:02.423 18:31:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:02.681 18:31:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:02.681 18:31:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.681 18:31:09 -- nvmf/common.sh@161 -- # true 00:15:02.681 18:31:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.681 18:31:09 -- nvmf/common.sh@162 -- # true 00:15:02.681 18:31:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.681 18:31:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.682 18:31:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.682 18:31:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.682 18:31:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.682 18:31:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.682 18:31:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.682 18:31:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.682 18:31:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.682 
18:31:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:02.682 18:31:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:02.682 18:31:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:02.682 18:31:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:02.682 18:31:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.682 18:31:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.682 18:31:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.682 18:31:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:02.682 18:31:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:02.682 18:31:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:02.682 18:31:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:02.682 18:31:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:02.682 18:31:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:02.682 18:31:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:02.682 18:31:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:02.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:02.682 00:15:02.682 --- 10.0.0.2 ping statistics --- 00:15:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.682 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:02.682 18:31:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:02.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:02.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:15:02.682 00:15:02.682 --- 10.0.0.3 ping statistics --- 00:15:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.682 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:02.682 18:31:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:02.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:02.682 00:15:02.682 --- 10.0.0.1 ping statistics --- 00:15:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.682 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:02.682 18:31:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.940 18:31:10 -- nvmf/common.sh@421 -- # return 0 00:15:02.940 18:31:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.940 18:31:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.940 18:31:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.940 18:31:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.940 18:31:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.940 18:31:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.940 18:31:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.940 18:31:10 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:02.940 18:31:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.940 18:31:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.940 18:31:10 -- common/autotest_common.sh@10 -- # set +x 00:15:02.940 18:31:10 -- nvmf/common.sh@469 -- # nvmfpid=84551 00:15:02.940 18:31:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:02.940 18:31:10 -- nvmf/common.sh@470 -- # waitforlisten 84551 00:15:02.940 18:31:10 -- common/autotest_common.sh@819 -- # '[' -z 84551 ']' 00:15:02.940 18:31:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.940 18:31:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.940 18:31:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.940 18:31:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.941 18:31:10 -- common/autotest_common.sh@10 -- # set +x 00:15:02.941 [2024-07-14 18:31:10.181585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:02.941 [2024-07-14 18:31:10.181671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.941 [2024-07-14 18:31:10.323105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.199 [2024-07-14 18:31:10.401196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:03.199 [2024-07-14 18:31:10.401310] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.199 [2024-07-14 18:31:10.401322] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.199 [2024-07-14 18:31:10.401329] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
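The nvmf_veth_init sequence above builds the test topology: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live in the nvmf_tgt_ns_spdk namespace, the bridge-side veth peers are enslaved to nvmf_br, and iptables opens TCP port 4420 on the initiator interface before connectivity is verified with the three pings. Condensed from the trace above (standard iproute2/iptables tooling assumed), the setup amounts to:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up                      # every veth end is brought up the same way
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
The nvmf_tgt application is then started inside that namespace ('ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc'), which is why its TCP listener on 10.0.0.2 is reachable from the root namespace through the bridge.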
00:15:03.199 [2024-07-14 18:31:10.401483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.199 [2024-07-14 18:31:10.401646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.199 [2024-07-14 18:31:10.402240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.199 [2024-07-14 18:31:10.402304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.766 18:31:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.766 18:31:11 -- common/autotest_common.sh@852 -- # return 0 00:15:03.766 18:31:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:03.766 18:31:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:03.766 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 18:31:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 [2024-07-14 18:31:11.298659] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 Malloc0 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.024 18:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.024 18:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 [2024-07-14 18:31:11.354363] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.024 18:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84604 00:15:04.024 18:31:11 
-- target/bdev_io_wait.sh@30 -- # READ_PID=84606 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:04.024 18:31:11 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:04.024 18:31:11 -- nvmf/common.sh@520 -- # config=() 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.025 18:31:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.025 { 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme$subsystem", 00:15:04.025 "trtype": "$TEST_TRANSPORT", 00:15:04.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "$NVMF_PORT", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.025 "hdgst": ${hdgst:-false}, 00:15:04.025 "ddgst": ${ddgst:-false} 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 } 00:15:04.025 EOF 00:15:04.025 )") 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84608 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # config=() 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:04.025 18:31:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.025 { 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme$subsystem", 00:15:04.025 "trtype": "$TEST_TRANSPORT", 00:15:04.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "$NVMF_PORT", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.025 "hdgst": ${hdgst:-false}, 00:15:04.025 "ddgst": ${ddgst:-false} 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 } 00:15:04.025 EOF 00:15:04.025 )") 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # cat 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # cat 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # config=() 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.025 18:31:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.025 { 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme$subsystem", 00:15:04.025 "trtype": "$TEST_TRANSPORT", 00:15:04.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "$NVMF_PORT", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.025 "hdgst": ${hdgst:-false}, 00:15:04.025 "ddgst": ${ddgst:-false} 
00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 } 00:15:04.025 EOF 00:15:04.025 )") 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84611 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@35 -- # sync 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # cat 00:15:04.025 18:31:11 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # config=() 00:15:04.025 18:31:11 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.025 18:31:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.025 { 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme$subsystem", 00:15:04.025 "trtype": "$TEST_TRANSPORT", 00:15:04.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "$NVMF_PORT", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.025 "hdgst": ${hdgst:-false}, 00:15:04.025 "ddgst": ${ddgst:-false} 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 } 00:15:04.025 EOF 00:15:04.025 )") 00:15:04.025 18:31:11 -- nvmf/common.sh@544 -- # jq . 00:15:04.025 18:31:11 -- nvmf/common.sh@544 -- # jq . 00:15:04.025 18:31:11 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.025 18:31:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme1", 00:15:04.025 "trtype": "tcp", 00:15:04.025 "traddr": "10.0.0.2", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "4420", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.025 "hdgst": false, 00:15:04.025 "ddgst": false 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 }' 00:15:04.025 18:31:11 -- nvmf/common.sh@542 -- # cat 00:15:04.025 18:31:11 -- nvmf/common.sh@544 -- # jq . 00:15:04.025 18:31:11 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.025 18:31:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme1", 00:15:04.025 "trtype": "tcp", 00:15:04.025 "traddr": "10.0.0.2", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "4420", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.025 "hdgst": false, 00:15:04.025 "ddgst": false 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 }' 00:15:04.025 18:31:11 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.025 18:31:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme1", 00:15:04.025 "trtype": "tcp", 00:15:04.025 "traddr": "10.0.0.2", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "4420", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.025 "hdgst": false, 00:15:04.025 "ddgst": false 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 }' 00:15:04.025 18:31:11 -- nvmf/common.sh@544 -- # jq . 
00:15:04.025 18:31:11 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.025 18:31:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.025 "params": { 00:15:04.025 "name": "Nvme1", 00:15:04.025 "trtype": "tcp", 00:15:04.025 "traddr": "10.0.0.2", 00:15:04.025 "adrfam": "ipv4", 00:15:04.025 "trsvcid": "4420", 00:15:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.025 "hdgst": false, 00:15:04.025 "ddgst": false 00:15:04.025 }, 00:15:04.025 "method": "bdev_nvme_attach_controller" 00:15:04.025 }' 00:15:04.025 [2024-07-14 18:31:11.415342] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:04.025 [2024-07-14 18:31:11.415433] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:04.026 [2024-07-14 18:31:11.417273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:04.026 [2024-07-14 18:31:11.417485] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:04.026 18:31:11 -- target/bdev_io_wait.sh@37 -- # wait 84604 00:15:04.284 [2024-07-14 18:31:11.448486] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:04.284 [2024-07-14 18:31:11.448574] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:04.284 [2024-07-14 18:31:11.449693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:04.284 [2024-07-14 18:31:11.449763] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:04.284 [2024-07-14 18:31:11.632759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.284 [2024-07-14 18:31:11.702803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:04.542 [2024-07-14 18:31:11.711545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.542 [2024-07-14 18:31:11.781790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:04.542 [2024-07-14 18:31:11.786677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.542 [2024-07-14 18:31:11.855960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:04.542 [2024-07-14 18:31:11.858918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.542 Running I/O for 1 seconds... 00:15:04.542 Running I/O for 1 seconds... 00:15:04.542 [2024-07-14 18:31:11.927681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:04.800 Running I/O for 1 seconds... 00:15:04.800 Running I/O for 1 seconds... 
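Before the four bdevperf jobs start, the target has been configured through rpc_cmd (a thin wrapper over scripts/rpc.py talking to the nvmf_tgt RPC socket): bdev I/O options, framework start, the TCP transport, a 64 MiB / 512 B malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, listening on 10.0.0.2:4420. Expressed directly with rpc.py, that sequence is roughly:
  scripts/rpc.py bdev_set_options -p 5 -c 1
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Each bdevperf instance then receives its own generated JSON config on /dev/fd/63, carrying the bdev_nvme_attach_controller parameters printed above (Nvme1, tcp, 10.0.0.2:4420, cnode1, digests disabled).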
00:15:05.733 00:15:05.733 Latency(us) 00:15:05.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.733 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:05.733 Nvme1n1 : 1.02 7366.56 28.78 0.00 0.00 17148.33 8102.63 31933.91 00:15:05.733 =================================================================================================================== 00:15:05.733 Total : 7366.56 28.78 0.00 0.00 17148.33 8102.63 31933.91 00:15:05.733 00:15:05.733 Latency(us) 00:15:05.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.733 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:05.733 Nvme1n1 : 1.00 202153.94 789.66 0.00 0.00 630.75 260.65 841.54 00:15:05.733 =================================================================================================================== 00:15:05.733 Total : 202153.94 789.66 0.00 0.00 630.75 260.65 841.54 00:15:05.733 00:15:05.733 Latency(us) 00:15:05.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.733 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:05.733 Nvme1n1 : 1.01 8695.63 33.97 0.00 0.00 14641.60 4706.68 19422.49 00:15:05.733 =================================================================================================================== 00:15:05.733 Total : 8695.63 33.97 0.00 0.00 14641.60 4706.68 19422.49 00:15:05.733 00:15:05.733 Latency(us) 00:15:05.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.733 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:05.733 Nvme1n1 : 1.01 7232.46 28.25 0.00 0.00 17644.09 5332.25 43611.23 00:15:05.733 =================================================================================================================== 00:15:05.733 Total : 7232.46 28.25 0.00 0.00 17644.09 5332.25 43611.23 00:15:05.990 18:31:13 -- target/bdev_io_wait.sh@38 -- # wait 84606 00:15:05.990 18:31:13 -- target/bdev_io_wait.sh@39 -- # wait 84608 00:15:05.990 18:31:13 -- target/bdev_io_wait.sh@40 -- # wait 84611 00:15:05.990 18:31:13 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.990 18:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.990 18:31:13 -- common/autotest_common.sh@10 -- # set +x 00:15:05.990 18:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.990 18:31:13 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:05.990 18:31:13 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:05.990 18:31:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.990 18:31:13 -- nvmf/common.sh@116 -- # sync 00:15:06.247 18:31:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:06.247 18:31:13 -- nvmf/common.sh@119 -- # set +e 00:15:06.247 18:31:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:06.247 18:31:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:06.247 rmmod nvme_tcp 00:15:06.247 rmmod nvme_fabrics 00:15:06.247 rmmod nvme_keyring 00:15:06.247 18:31:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:06.247 18:31:13 -- nvmf/common.sh@123 -- # set -e 00:15:06.247 18:31:13 -- nvmf/common.sh@124 -- # return 0 00:15:06.247 18:31:13 -- nvmf/common.sh@477 -- # '[' -n 84551 ']' 00:15:06.247 18:31:13 -- nvmf/common.sh@478 -- # killprocess 84551 00:15:06.247 18:31:13 -- common/autotest_common.sh@926 -- # '[' -z 84551 ']' 00:15:06.247 18:31:13 -- common/autotest_common.sh@930 -- # 
kill -0 84551 00:15:06.247 18:31:13 -- common/autotest_common.sh@931 -- # uname 00:15:06.247 18:31:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:06.247 18:31:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84551 00:15:06.247 killing process with pid 84551 00:15:06.247 18:31:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.247 18:31:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.247 18:31:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84551' 00:15:06.247 18:31:13 -- common/autotest_common.sh@945 -- # kill 84551 00:15:06.247 18:31:13 -- common/autotest_common.sh@950 -- # wait 84551 00:15:06.506 18:31:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.506 18:31:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:06.506 18:31:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:06.506 18:31:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.506 18:31:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.506 18:31:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.506 18:31:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:06.506 ************************************ 00:15:06.506 END TEST nvmf_bdev_io_wait 00:15:06.506 ************************************ 00:15:06.506 00:15:06.506 real 0m4.069s 00:15:06.506 user 0m18.139s 00:15:06.506 sys 0m1.929s 00:15:06.506 18:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.506 18:31:13 -- common/autotest_common.sh@10 -- # set +x 00:15:06.506 18:31:13 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:06.506 18:31:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:06.506 18:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:06.506 18:31:13 -- common/autotest_common.sh@10 -- # set +x 00:15:06.506 ************************************ 00:15:06.506 START TEST nvmf_queue_depth 00:15:06.506 ************************************ 00:15:06.506 18:31:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:06.506 * Looking for test storage... 
00:15:06.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:06.506 18:31:13 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.506 18:31:13 -- nvmf/common.sh@7 -- # uname -s 00:15:06.506 18:31:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.506 18:31:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.506 18:31:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.506 18:31:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.506 18:31:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.506 18:31:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.506 18:31:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.506 18:31:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.506 18:31:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.506 18:31:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:15:06.506 18:31:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:15:06.506 18:31:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.506 18:31:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.506 18:31:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.506 18:31:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.506 18:31:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.506 18:31:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.506 18:31:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.506 18:31:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.506 18:31:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.506 18:31:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.506 18:31:13 -- 
paths/export.sh@5 -- # export PATH 00:15:06.506 18:31:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.506 18:31:13 -- nvmf/common.sh@46 -- # : 0 00:15:06.506 18:31:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:06.506 18:31:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:06.506 18:31:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:06.506 18:31:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.506 18:31:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.506 18:31:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:06.506 18:31:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:06.506 18:31:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:06.506 18:31:13 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:06.506 18:31:13 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:06.506 18:31:13 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.506 18:31:13 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:06.506 18:31:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:06.506 18:31:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.506 18:31:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:06.506 18:31:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:06.506 18:31:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:06.506 18:31:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.506 18:31:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.506 18:31:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.506 18:31:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:06.506 18:31:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:06.506 18:31:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.506 18:31:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.506 18:31:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.506 18:31:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:06.506 18:31:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.506 18:31:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.506 18:31:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.506 18:31:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.506 18:31:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.506 18:31:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.506 18:31:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.506 18:31:13 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.506 18:31:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:06.506 18:31:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:06.506 Cannot find device "nvmf_tgt_br" 00:15:06.506 18:31:13 -- nvmf/common.sh@154 -- # true 00:15:06.764 18:31:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.764 Cannot find device "nvmf_tgt_br2" 00:15:06.764 18:31:13 -- nvmf/common.sh@155 -- # true 00:15:06.764 18:31:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:06.764 18:31:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:06.764 Cannot find device "nvmf_tgt_br" 00:15:06.764 18:31:13 -- nvmf/common.sh@157 -- # true 00:15:06.764 18:31:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:06.764 Cannot find device "nvmf_tgt_br2" 00:15:06.764 18:31:13 -- nvmf/common.sh@158 -- # true 00:15:06.764 18:31:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:06.764 18:31:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:06.764 18:31:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.764 18:31:14 -- nvmf/common.sh@161 -- # true 00:15:06.764 18:31:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.764 18:31:14 -- nvmf/common.sh@162 -- # true 00:15:06.764 18:31:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.764 18:31:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.764 18:31:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.764 18:31:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.764 18:31:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.764 18:31:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.764 18:31:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.764 18:31:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.764 18:31:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.764 18:31:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:06.764 18:31:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:06.764 18:31:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:06.764 18:31:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:06.764 18:31:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.764 18:31:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.764 18:31:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.764 18:31:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:06.764 18:31:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:06.764 18:31:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.764 18:31:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.764 18:31:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.764 
18:31:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.765 18:31:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.765 18:31:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:06.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:06.765 00:15:06.765 --- 10.0.0.2 ping statistics --- 00:15:06.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.765 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:06.765 18:31:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:06.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:06.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:06.765 00:15:06.765 --- 10.0.0.3 ping statistics --- 00:15:06.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.765 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:06.765 18:31:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:06.765 00:15:06.765 --- 10.0.0.1 ping statistics --- 00:15:06.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.765 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:06.765 18:31:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.765 18:31:14 -- nvmf/common.sh@421 -- # return 0 00:15:06.765 18:31:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:06.765 18:31:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.765 18:31:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:06.765 18:31:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:06.765 18:31:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.765 18:31:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:06.765 18:31:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.022 18:31:14 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:07.022 18:31:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.022 18:31:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:07.022 18:31:14 -- common/autotest_common.sh@10 -- # set +x 00:15:07.022 18:31:14 -- nvmf/common.sh@469 -- # nvmfpid=84847 00:15:07.022 18:31:14 -- nvmf/common.sh@470 -- # waitforlisten 84847 00:15:07.022 18:31:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:07.022 18:31:14 -- common/autotest_common.sh@819 -- # '[' -z 84847 ']' 00:15:07.022 18:31:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.022 18:31:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.022 18:31:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.022 18:31:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.022 18:31:14 -- common/autotest_common.sh@10 -- # set +x 00:15:07.022 [2024-07-14 18:31:14.258392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:07.022 [2024-07-14 18:31:14.258473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.022 [2024-07-14 18:31:14.396308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.280 [2024-07-14 18:31:14.457468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.280 [2024-07-14 18:31:14.457665] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.280 [2024-07-14 18:31:14.457680] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.280 [2024-07-14 18:31:14.457689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.280 [2024-07-14 18:31:14.457718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.847 18:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:07.847 18:31:15 -- common/autotest_common.sh@852 -- # return 0 00:15:07.847 18:31:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.847 18:31:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:07.847 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.105 18:31:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.105 18:31:15 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.105 18:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.105 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.105 [2024-07-14 18:31:15.293891] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.105 18:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.105 18:31:15 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.105 18:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.105 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.105 Malloc0 00:15:08.106 18:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.106 18:31:15 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.106 18:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.106 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.106 18:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.106 18:31:15 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.106 18:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.106 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.106 18:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.106 18:31:15 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.106 18:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.106 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.106 [2024-07-14 18:31:15.356277] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.106 18:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.106 18:31:15 -- target/queue_depth.sh@30 -- # bdevperf_pid=84897 00:15:08.106 18:31:15 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:08.106 18:31:15 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.106 18:31:15 -- target/queue_depth.sh@33 -- # waitforlisten 84897 /var/tmp/bdevperf.sock 00:15:08.106 18:31:15 -- common/autotest_common.sh@819 -- # '[' -z 84897 ']' 00:15:08.106 18:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.106 18:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:08.106 18:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.106 18:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:08.106 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.106 [2024-07-14 18:31:15.404651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:08.106 [2024-07-14 18:31:15.404742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84897 ] 00:15:08.363 [2024-07-14 18:31:15.540178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.363 [2024-07-14 18:31:15.604759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.297 18:31:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:09.297 18:31:16 -- common/autotest_common.sh@852 -- # return 0 00:15:09.297 18:31:16 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:09.297 18:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.297 18:31:16 -- common/autotest_common.sh@10 -- # set +x 00:15:09.297 NVMe0n1 00:15:09.297 18:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.297 18:31:16 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.297 Running I/O for 10 seconds... 
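The queue-depth run above uses a single bdevperf started idle with -z on its own RPC socket; the NVMe-oF controller is attached over that socket and the 10-second verify job at queue depth 1024 is then launched via bdevperf.py. Condensed (paths abbreviated to the SPDK repo root):
  # start bdevperf in wait-for-RPC mode with the job shape baked in
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the target's subsystem; its namespace shows up as bdev NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the configured job and wait for the results that follow
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests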
00:15:19.259 00:15:19.259 Latency(us) 00:15:19.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.259 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:19.259 Verification LBA range: start 0x0 length 0x4000 00:15:19.259 NVMe0n1 : 10.05 16439.73 64.22 0.00 0.00 62077.30 13762.56 55050.24 00:15:19.259 =================================================================================================================== 00:15:19.259 Total : 16439.73 64.22 0.00 0.00 62077.30 13762.56 55050.24 00:15:19.259 0 00:15:19.517 18:31:26 -- target/queue_depth.sh@39 -- # killprocess 84897 00:15:19.517 18:31:26 -- common/autotest_common.sh@926 -- # '[' -z 84897 ']' 00:15:19.517 18:31:26 -- common/autotest_common.sh@930 -- # kill -0 84897 00:15:19.517 18:31:26 -- common/autotest_common.sh@931 -- # uname 00:15:19.517 18:31:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:19.517 18:31:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84897 00:15:19.517 18:31:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:19.517 18:31:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:19.517 killing process with pid 84897 00:15:19.517 18:31:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84897' 00:15:19.517 18:31:26 -- common/autotest_common.sh@945 -- # kill 84897 00:15:19.517 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.517 00:15:19.517 Latency(us) 00:15:19.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.517 =================================================================================================================== 00:15:19.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.517 18:31:26 -- common/autotest_common.sh@950 -- # wait 84897 00:15:19.517 18:31:26 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:19.517 18:31:26 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:19.517 18:31:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.517 18:31:26 -- nvmf/common.sh@116 -- # sync 00:15:19.775 18:31:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.775 18:31:26 -- nvmf/common.sh@119 -- # set +e 00:15:19.775 18:31:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.775 18:31:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.775 rmmod nvme_tcp 00:15:19.775 rmmod nvme_fabrics 00:15:19.775 rmmod nvme_keyring 00:15:19.775 18:31:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.775 18:31:27 -- nvmf/common.sh@123 -- # set -e 00:15:19.775 18:31:27 -- nvmf/common.sh@124 -- # return 0 00:15:19.775 18:31:27 -- nvmf/common.sh@477 -- # '[' -n 84847 ']' 00:15:19.775 18:31:27 -- nvmf/common.sh@478 -- # killprocess 84847 00:15:19.775 18:31:27 -- common/autotest_common.sh@926 -- # '[' -z 84847 ']' 00:15:19.775 18:31:27 -- common/autotest_common.sh@930 -- # kill -0 84847 00:15:19.775 18:31:27 -- common/autotest_common.sh@931 -- # uname 00:15:19.775 18:31:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:19.775 18:31:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84847 00:15:19.775 18:31:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:19.775 18:31:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:19.775 killing process with pid 84847 00:15:19.775 18:31:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84847' 00:15:19.775 18:31:27 -- 
common/autotest_common.sh@945 -- # kill 84847 00:15:19.775 18:31:27 -- common/autotest_common.sh@950 -- # wait 84847 00:15:20.035 18:31:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:20.035 18:31:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:20.035 18:31:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:20.035 18:31:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.035 18:31:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.035 18:31:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.035 18:31:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:20.035 ************************************ 00:15:20.035 END TEST nvmf_queue_depth 00:15:20.035 ************************************ 00:15:20.035 00:15:20.035 real 0m13.492s 00:15:20.035 user 0m23.091s 00:15:20.035 sys 0m2.177s 00:15:20.035 18:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.035 18:31:27 -- common/autotest_common.sh@10 -- # set +x 00:15:20.035 18:31:27 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.035 18:31:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:20.035 18:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:20.035 18:31:27 -- common/autotest_common.sh@10 -- # set +x 00:15:20.035 ************************************ 00:15:20.035 START TEST nvmf_multipath 00:15:20.035 ************************************ 00:15:20.035 18:31:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.035 * Looking for test storage... 
00:15:20.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:20.035 18:31:27 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.035 18:31:27 -- nvmf/common.sh@7 -- # uname -s 00:15:20.035 18:31:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.035 18:31:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.035 18:31:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.035 18:31:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.035 18:31:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.035 18:31:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.035 18:31:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.035 18:31:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.035 18:31:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.035 18:31:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:15:20.035 18:31:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:15:20.035 18:31:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.035 18:31:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.035 18:31:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.035 18:31:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.035 18:31:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.035 18:31:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.035 18:31:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.035 18:31:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.035 18:31:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.035 18:31:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.035 18:31:27 -- 
paths/export.sh@5 -- # export PATH 00:15:20.035 18:31:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.035 18:31:27 -- nvmf/common.sh@46 -- # : 0 00:15:20.035 18:31:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:20.035 18:31:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:20.035 18:31:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:20.035 18:31:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.035 18:31:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.035 18:31:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:20.035 18:31:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:20.035 18:31:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:20.035 18:31:27 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.035 18:31:27 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.035 18:31:27 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:20.035 18:31:27 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.035 18:31:27 -- target/multipath.sh@43 -- # nvmftestinit 00:15:20.035 18:31:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:20.035 18:31:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.035 18:31:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:20.035 18:31:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:20.035 18:31:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:20.035 18:31:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.035 18:31:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.035 18:31:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.035 18:31:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:20.035 18:31:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:20.035 18:31:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.035 18:31:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.035 18:31:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.035 18:31:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:20.035 18:31:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.035 18:31:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.035 18:31:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.035 18:31:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.035 18:31:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.035 18:31:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.035 18:31:27 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.036 18:31:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.036 18:31:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:20.036 18:31:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:20.036 Cannot find device "nvmf_tgt_br" 00:15:20.036 18:31:27 -- nvmf/common.sh@154 -- # true 00:15:20.036 18:31:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.294 Cannot find device "nvmf_tgt_br2" 00:15:20.294 18:31:27 -- nvmf/common.sh@155 -- # true 00:15:20.294 18:31:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:20.294 18:31:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:20.294 Cannot find device "nvmf_tgt_br" 00:15:20.294 18:31:27 -- nvmf/common.sh@157 -- # true 00:15:20.294 18:31:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:20.294 Cannot find device "nvmf_tgt_br2" 00:15:20.294 18:31:27 -- nvmf/common.sh@158 -- # true 00:15:20.294 18:31:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:20.294 18:31:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:20.294 18:31:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.294 18:31:27 -- nvmf/common.sh@161 -- # true 00:15:20.294 18:31:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.294 18:31:27 -- nvmf/common.sh@162 -- # true 00:15:20.294 18:31:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.294 18:31:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.294 18:31:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.294 18:31:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.294 18:31:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.294 18:31:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.294 18:31:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.294 18:31:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.294 18:31:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.294 18:31:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:20.294 18:31:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.294 18:31:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.294 18:31:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.294 18:31:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.294 18:31:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.294 18:31:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.294 18:31:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.552 18:31:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.552 18:31:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.552 18:31:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.552 18:31:27 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.552 18:31:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.552 18:31:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.552 18:31:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:20.552 00:15:20.552 --- 10.0.0.2 ping statistics --- 00:15:20.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.552 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:20.552 18:31:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:15:20.552 00:15:20.552 --- 10.0.0.3 ping statistics --- 00:15:20.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.552 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:20.552 18:31:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:20.552 00:15:20.552 --- 10.0.0.1 ping statistics --- 00:15:20.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.552 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:20.552 18:31:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.552 18:31:27 -- nvmf/common.sh@421 -- # return 0 00:15:20.552 18:31:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.552 18:31:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.552 18:31:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.552 18:31:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.552 18:31:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.552 18:31:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.552 18:31:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.552 18:31:27 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:20.552 18:31:27 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:20.552 18:31:27 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:20.552 18:31:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.552 18:31:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:20.552 18:31:27 -- common/autotest_common.sh@10 -- # set +x 00:15:20.552 18:31:27 -- nvmf/common.sh@469 -- # nvmfpid=85227 00:15:20.552 18:31:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.552 18:31:27 -- nvmf/common.sh@470 -- # waitforlisten 85227 00:15:20.552 18:31:27 -- common/autotest_common.sh@819 -- # '[' -z 85227 ']' 00:15:20.552 18:31:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.552 18:31:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:20.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.552 18:31:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
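For reference, the nvmf_veth_init commands traced above reduce to the topology below: the initiator interface stays in the root namespace at 10.0.0.1, the two target interfaces (the two multipath paths) live inside nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, and everything is joined through the nvmf_br bridge with TCP/4420 allowed in. A condensed sketch, grouped for readability; the individual commands are the ones shown in the log:

# Sketch of the veth/namespace topology built above (not the harness script itself).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target path
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target path
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT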
00:15:20.552 18:31:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:20.552 18:31:27 -- common/autotest_common.sh@10 -- # set +x 00:15:20.552 [2024-07-14 18:31:27.858884] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:20.552 [2024-07-14 18:31:27.858971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.811 [2024-07-14 18:31:27.997732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.811 [2024-07-14 18:31:28.057465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.811 [2024-07-14 18:31:28.057918] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.811 [2024-07-14 18:31:28.058035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.811 [2024-07-14 18:31:28.058168] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.811 [2024-07-14 18:31:28.058547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.811 [2024-07-14 18:31:28.058643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.811 [2024-07-14 18:31:28.059186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.811 [2024-07-14 18:31:28.059218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.744 18:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:21.744 18:31:28 -- common/autotest_common.sh@852 -- # return 0 00:15:21.744 18:31:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:21.744 18:31:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:21.744 18:31:28 -- common/autotest_common.sh@10 -- # set +x 00:15:21.744 18:31:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.744 18:31:28 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:21.744 [2024-07-14 18:31:29.116071] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.744 18:31:29 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:22.002 Malloc0 00:15:22.260 18:31:29 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:22.260 18:31:29 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.517 18:31:29 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.773 [2024-07-14 18:31:30.049821] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.773 18:31:30 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:23.030 [2024-07-14 18:31:30.258010] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:23.030 18:31:30 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:23.287 18:31:30 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:23.287 18:31:30 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.287 18:31:30 -- common/autotest_common.sh@1177 -- # local i=0 00:15:23.287 18:31:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.287 18:31:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:23.287 18:31:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:25.811 18:31:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:25.811 18:31:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:25.811 18:31:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.811 18:31:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:25.811 18:31:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.811 18:31:32 -- common/autotest_common.sh@1187 -- # return 0 00:15:25.811 18:31:32 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:25.811 18:31:32 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:25.811 18:31:32 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:25.811 18:31:32 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:25.811 18:31:32 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:25.811 18:31:32 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:25.811 18:31:32 -- target/multipath.sh@38 -- # return 0 00:15:25.811 18:31:32 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:25.811 18:31:32 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:25.811 18:31:32 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:25.811 18:31:32 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:25.811 18:31:32 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:25.811 18:31:32 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:25.811 18:31:32 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:25.811 18:31:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:25.811 18:31:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:25.811 18:31:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:25.811 18:31:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:25.811 18:31:32 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:25.811 18:31:32 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:25.811 18:31:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:25.811 18:31:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:25.811 18:31:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:25.811 18:31:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:25.811 18:31:32 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:25.811 18:31:32 -- target/multipath.sh@85 -- # echo numa 00:15:25.811 18:31:32 -- target/multipath.sh@88 -- # fio_pid=85359 00:15:25.811 18:31:32 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:25.811 18:31:32 -- target/multipath.sh@90 -- # sleep 1 00:15:25.811 [global] 00:15:25.811 thread=1 00:15:25.811 invalidate=1 00:15:25.811 rw=randrw 00:15:25.811 time_based=1 00:15:25.811 runtime=6 00:15:25.811 ioengine=libaio 00:15:25.811 direct=1 00:15:25.811 bs=4096 00:15:25.811 iodepth=128 00:15:25.811 norandommap=0 00:15:25.811 numjobs=1 00:15:25.811 00:15:25.811 verify_dump=1 00:15:25.811 verify_backlog=512 00:15:25.811 verify_state_save=0 00:15:25.811 do_verify=1 00:15:25.811 verify=crc32c-intel 00:15:25.811 [job0] 00:15:25.811 filename=/dev/nvme0n1 00:15:25.811 Could not set queue depth (nvme0n1) 00:15:25.811 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:25.811 fio-3.35 00:15:25.811 Starting 1 thread 00:15:26.377 18:31:33 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:26.634 18:31:34 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:26.892 18:31:34 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:26.892 18:31:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:26.892 18:31:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.892 18:31:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.892 18:31:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.892 18:31:34 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.892 18:31:34 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:26.892 18:31:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:26.892 18:31:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.892 18:31:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.892 18:31:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.892 18:31:34 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.892 18:31:34 -- target/multipath.sh@25 -- # sleep 1s 00:15:28.260 18:31:35 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:28.260 18:31:35 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:28.260 18:31:35 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:28.260 18:31:35 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:28.260 18:31:35 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:28.519 18:31:35 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:28.519 18:31:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:28.519 18:31:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:28.519 18:31:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:28.519 18:31:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:28.519 18:31:35 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:28.519 18:31:35 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:28.519 18:31:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:28.519 18:31:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:28.519 18:31:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:28.519 18:31:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:28.519 18:31:35 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:28.519 18:31:35 -- target/multipath.sh@25 -- # sleep 1s 00:15:29.508 18:31:36 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:29.508 18:31:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.508 18:31:36 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:29.508 18:31:36 -- target/multipath.sh@104 -- # wait 85359 00:15:32.037 00:15:32.037 job0: (groupid=0, jobs=1): err= 0: pid=85386: Sun Jul 14 18:31:39 2024 00:15:32.037 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(281MiB/6003msec) 00:15:32.037 slat (usec): min=5, max=6251, avg=47.78, stdev=213.23 00:15:32.037 clat (usec): min=1597, max=13303, avg=7314.22, stdev=1152.39 00:15:32.037 lat (usec): min=1835, max=13312, avg=7362.00, stdev=1159.59 00:15:32.037 clat percentiles (usec): 00:15:32.037 | 1.00th=[ 4424], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6456], 00:15:32.037 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7504], 00:15:32.037 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9241], 00:15:32.037 | 99.00th=[10814], 99.50th=[11338], 99.90th=[11994], 99.95th=[12256], 00:15:32.037 | 99.99th=[12780] 00:15:32.037 bw ( KiB/s): min=16416, max=29848, per=53.56%, avg=25637.09, stdev=4284.81, samples=11 00:15:32.037 iops : min= 4104, max= 7462, avg=6409.27, stdev=1071.20, samples=11 00:15:32.037 write: IOPS=6839, BW=26.7MiB/s (28.0MB/s)(145MiB/5429msec); 0 zone resets 00:15:32.037 slat (usec): min=15, max=2378, avg=60.02, stdev=149.10 00:15:32.037 clat (usec): min=2121, max=12413, avg=6352.87, stdev=922.93 00:15:32.037 lat (usec): min=2159, max=12437, avg=6412.90, stdev=925.84 00:15:32.037 clat percentiles (usec): 00:15:32.037 | 1.00th=[ 3556], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 5800], 00:15:32.037 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6587], 00:15:32.037 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7504], 00:15:32.037 | 99.00th=[ 9110], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[11863], 00:15:32.037 | 99.99th=[12256] 00:15:32.038 bw ( KiB/s): min=16384, max=30928, per=93.47%, avg=25572.36, stdev=4285.55, samples=11 00:15:32.038 iops : min= 4096, max= 7732, avg=6393.09, stdev=1071.39, samples=11 00:15:32.038 lat (msec) : 2=0.01%, 4=1.19%, 10=96.95%, 20=1.85% 00:15:32.038 cpu : usr=5.88%, sys=23.64%, ctx=6515, majf=0, minf=108 00:15:32.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:32.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.038 issued rwts: total=71831,37132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.038 00:15:32.038 Run status group 0 (all jobs): 00:15:32.038 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=281MiB (294MB), run=6003-6003msec 00:15:32.038 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=145MiB (152MB), run=5429-5429msec 00:15:32.038 00:15:32.038 Disk stats (read/write): 00:15:32.038 nvme0n1: ios=70097/37132, merge=0/0, ticks=479635/220251, in_queue=699886, util=98.65% 00:15:32.038 18:31:39 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:32.038 18:31:39 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:32.296 18:31:39 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:32.296 
18:31:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:32.296 18:31:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.296 18:31:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:32.296 18:31:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:32.296 18:31:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:32.296 18:31:39 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:32.296 18:31:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:32.296 18:31:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.296 18:31:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:32.296 18:31:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:32.296 18:31:39 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:32.296 18:31:39 -- target/multipath.sh@25 -- # sleep 1s 00:15:33.229 18:31:40 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:33.229 18:31:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:33.229 18:31:40 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:33.229 18:31:40 -- target/multipath.sh@113 -- # echo round-robin 00:15:33.229 18:31:40 -- target/multipath.sh@116 -- # fio_pid=85511 00:15:33.229 18:31:40 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:33.229 18:31:40 -- target/multipath.sh@118 -- # sleep 1 00:15:33.229 [global] 00:15:33.229 thread=1 00:15:33.229 invalidate=1 00:15:33.229 rw=randrw 00:15:33.229 time_based=1 00:15:33.229 runtime=6 00:15:33.229 ioengine=libaio 00:15:33.229 direct=1 00:15:33.229 bs=4096 00:15:33.229 iodepth=128 00:15:33.229 norandommap=0 00:15:33.229 numjobs=1 00:15:33.229 00:15:33.229 verify_dump=1 00:15:33.229 verify_backlog=512 00:15:33.229 verify_state_save=0 00:15:33.229 do_verify=1 00:15:33.229 verify=crc32c-intel 00:15:33.229 [job0] 00:15:33.229 filename=/dev/nvme0n1 00:15:33.229 Could not set queue depth (nvme0n1) 00:15:33.487 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:33.487 fio-3.35 00:15:33.487 Starting 1 thread 00:15:34.422 18:31:41 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:34.422 18:31:41 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:34.681 18:31:42 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:34.681 18:31:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:34.681 18:31:42 -- target/multipath.sh@22 -- # local timeout=20 00:15:34.681 18:31:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:34.681 18:31:42 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:34.681 18:31:42 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:34.681 18:31:42 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:34.681 18:31:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:34.681 18:31:42 -- target/multipath.sh@22 -- # local timeout=20 00:15:34.681 18:31:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:34.681 18:31:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.681 18:31:42 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:34.681 18:31:42 -- target/multipath.sh@25 -- # sleep 1s 00:15:35.614 18:31:43 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:35.614 18:31:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.614 18:31:43 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:35.614 18:31:43 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:36.179 18:31:43 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:36.179 18:31:43 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:36.179 18:31:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:36.179 18:31:43 -- target/multipath.sh@22 -- # local timeout=20 00:15:36.179 18:31:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:36.179 18:31:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:36.179 18:31:43 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:36.179 18:31:43 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:36.179 18:31:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:36.179 18:31:43 -- target/multipath.sh@22 -- # local timeout=20 00:15:36.180 18:31:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:36.180 18:31:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.180 18:31:43 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:36.180 18:31:43 -- target/multipath.sh@25 -- # sleep 1s 00:15:37.110 18:31:44 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:37.110 18:31:44 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.369 18:31:44 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:37.369 18:31:44 -- target/multipath.sh@132 -- # wait 85511 00:15:39.898 00:15:39.898 job0: (groupid=0, jobs=1): err= 0: pid=85532: Sun Jul 14 18:31:46 2024 00:15:39.898 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(312MiB/6005msec) 00:15:39.898 slat (usec): min=2, max=8881, avg=39.50, stdev=189.53 00:15:39.898 clat (usec): min=630, max=16927, avg=6691.86, stdev=1417.11 00:15:39.898 lat (usec): min=653, max=16956, avg=6731.36, stdev=1432.15 00:15:39.898 clat percentiles (usec): 00:15:39.898 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5604], 00:15:39.898 | 30.00th=[ 6194], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6980], 00:15:39.898 | 70.00th=[ 7373], 80.00th=[ 7832], 90.00th=[ 8356], 95.00th=[ 8848], 00:15:39.898 | 99.00th=[10421], 99.50th=[10945], 99.90th=[11994], 99.95th=[12518], 00:15:39.898 | 99.99th=[13698] 00:15:39.898 bw ( KiB/s): min=17192, max=48040, per=53.13%, avg=28290.91, stdev=9667.78, samples=11 00:15:39.898 iops : min= 4298, max=12010, avg=7072.73, stdev=2416.95, samples=11 00:15:39.898 write: IOPS=8088, BW=31.6MiB/s (33.1MB/s)(159MiB/5018msec); 0 zone resets 00:15:39.898 slat (usec): min=4, max=5453, avg=50.19, stdev=124.06 00:15:39.898 clat (usec): min=483, max=12688, avg=5592.29, stdev=1476.14 00:15:39.898 lat (usec): min=560, max=12713, avg=5642.48, stdev=1488.18 00:15:39.898 clat percentiles (usec): 00:15:39.898 | 1.00th=[ 2507], 5.00th=[ 3064], 10.00th=[ 3425], 20.00th=[ 4015], 00:15:39.898 | 30.00th=[ 4752], 40.00th=[ 5604], 50.00th=[ 5997], 60.00th=[ 6259], 00:15:39.898 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7504], 00:15:39.898 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[11076], 99.95th=[11731], 00:15:39.898 | 99.99th=[12256] 00:15:39.898 bw ( KiB/s): min=18096, max=47064, per=87.49%, avg=28308.36, stdev=9259.93, samples=11 00:15:39.898 iops : min= 4524, max=11766, avg=7077.09, stdev=2314.98, samples=11 00:15:39.898 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:39.898 lat (msec) : 2=0.08%, 4=9.00%, 10=89.72%, 20=1.18% 00:15:39.898 cpu : usr=6.56%, sys=25.73%, ctx=8090, majf=0, minf=121 00:15:39.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:39.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.898 issued rwts: total=79938,40590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.898 00:15:39.898 Run status group 0 (all jobs): 00:15:39.898 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=312MiB (327MB), run=6005-6005msec 00:15:39.898 WRITE: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=159MiB (166MB), run=5018-5018msec 00:15:39.898 00:15:39.898 Disk stats (read/write): 00:15:39.898 nvme0n1: ios=79298/39616, merge=0/0, ticks=487195/199516, in_queue=686711, util=98.56% 00:15:39.898 18:31:46 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:39.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:39.898 18:31:46 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:39.898 18:31:46 -- common/autotest_common.sh@1198 -- # local i=0 00:15:39.898 18:31:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:39.898 18:31:46 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.898 18:31:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:39.898 18:31:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.898 18:31:46 -- common/autotest_common.sh@1210 -- # return 0 00:15:39.898 18:31:46 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.898 18:31:47 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:39.898 18:31:47 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:39.898 18:31:47 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:39.898 18:31:47 -- target/multipath.sh@144 -- # nvmftestfini 00:15:39.898 18:31:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:39.898 18:31:47 -- nvmf/common.sh@116 -- # sync 00:15:39.898 18:31:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:39.898 18:31:47 -- nvmf/common.sh@119 -- # set +e 00:15:39.898 18:31:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:39.898 18:31:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:39.898 rmmod nvme_tcp 00:15:39.898 rmmod nvme_fabrics 00:15:39.898 rmmod nvme_keyring 00:15:39.898 18:31:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:39.898 18:31:47 -- nvmf/common.sh@123 -- # set -e 00:15:39.898 18:31:47 -- nvmf/common.sh@124 -- # return 0 00:15:39.898 18:31:47 -- nvmf/common.sh@477 -- # '[' -n 85227 ']' 00:15:39.898 18:31:47 -- nvmf/common.sh@478 -- # killprocess 85227 00:15:39.898 18:31:47 -- common/autotest_common.sh@926 -- # '[' -z 85227 ']' 00:15:39.898 18:31:47 -- common/autotest_common.sh@930 -- # kill -0 85227 00:15:39.898 18:31:47 -- common/autotest_common.sh@931 -- # uname 00:15:39.898 18:31:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:39.898 18:31:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85227 00:15:39.898 killing process with pid 85227 00:15:39.898 18:31:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:39.898 18:31:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:39.898 18:31:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85227' 00:15:39.898 18:31:47 -- common/autotest_common.sh@945 -- # kill 85227 00:15:39.898 18:31:47 -- common/autotest_common.sh@950 -- # wait 85227 00:15:40.156 18:31:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.156 18:31:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:40.156 18:31:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:40.156 18:31:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.156 18:31:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:40.156 18:31:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.156 18:31:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.156 18:31:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.156 18:31:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:40.156 ************************************ 00:15:40.156 END TEST nvmf_multipath 00:15:40.156 ************************************ 00:15:40.156 00:15:40.156 real 0m20.239s 00:15:40.156 user 1m18.507s 00:15:40.156 sys 0m7.218s 00:15:40.156 18:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.156 18:31:47 -- common/autotest_common.sh@10 -- # set +x 00:15:40.415 18:31:47 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:40.415 18:31:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:40.415 18:31:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.415 18:31:47 -- common/autotest_common.sh@10 -- # set +x 00:15:40.415 ************************************ 00:15:40.415 START TEST nvmf_zcopy 00:15:40.415 ************************************ 00:15:40.415 18:31:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:40.415 * Looking for test storage... 00:15:40.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:40.415 18:31:47 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.415 18:31:47 -- nvmf/common.sh@7 -- # uname -s 00:15:40.415 18:31:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.415 18:31:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.415 18:31:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.415 18:31:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.415 18:31:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.415 18:31:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.415 18:31:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.415 18:31:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.415 18:31:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.415 18:31:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.415 18:31:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:15:40.415 18:31:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:15:40.415 18:31:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.415 18:31:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.415 18:31:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.415 18:31:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.416 18:31:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.416 18:31:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.416 18:31:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.416 18:31:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.416 18:31:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.416 
18:31:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.416 18:31:47 -- paths/export.sh@5 -- # export PATH 00:15:40.416 18:31:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.416 18:31:47 -- nvmf/common.sh@46 -- # : 0 00:15:40.416 18:31:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:40.416 18:31:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:40.416 18:31:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:40.416 18:31:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.416 18:31:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.416 18:31:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:40.416 18:31:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:40.416 18:31:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:40.416 18:31:47 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:40.416 18:31:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:40.416 18:31:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.416 18:31:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:40.416 18:31:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:40.416 18:31:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:40.416 18:31:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.416 18:31:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.416 18:31:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.416 18:31:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:40.416 18:31:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:40.416 18:31:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:40.416 18:31:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:40.416 18:31:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:40.416 18:31:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:40.416 18:31:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.416 18:31:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.416 18:31:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.416 18:31:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:40.416 18:31:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.416 18:31:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.416 18:31:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.416 18:31:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.416 18:31:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.416 18:31:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.416 18:31:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.416 18:31:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.416 18:31:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:40.416 18:31:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:40.416 Cannot find device "nvmf_tgt_br" 00:15:40.416 18:31:47 -- nvmf/common.sh@154 -- # true 00:15:40.416 18:31:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.416 Cannot find device "nvmf_tgt_br2" 00:15:40.416 18:31:47 -- nvmf/common.sh@155 -- # true 00:15:40.416 18:31:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:40.416 18:31:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:40.416 Cannot find device "nvmf_tgt_br" 00:15:40.416 18:31:47 -- nvmf/common.sh@157 -- # true 00:15:40.416 18:31:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:40.416 Cannot find device "nvmf_tgt_br2" 00:15:40.416 18:31:47 -- nvmf/common.sh@158 -- # true 00:15:40.416 18:31:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:40.416 18:31:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:40.674 18:31:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.674 18:31:47 -- nvmf/common.sh@161 -- # true 00:15:40.674 18:31:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.674 18:31:47 -- nvmf/common.sh@162 -- # true 00:15:40.674 18:31:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.674 18:31:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.674 18:31:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.674 18:31:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.674 18:31:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.674 18:31:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.674 18:31:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.674 18:31:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.674 18:31:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.674 18:31:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:40.674 18:31:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:40.674 18:31:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:40.674 18:31:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:40.674 18:31:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.674 18:31:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.674 18:31:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.674 18:31:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:40.674 
18:31:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:40.674 18:31:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.674 18:31:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.674 18:31:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.674 18:31:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.674 18:31:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.674 18:31:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:40.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:40.674 00:15:40.674 --- 10.0.0.2 ping statistics --- 00:15:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.674 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:40.674 18:31:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:40.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:40.674 00:15:40.674 --- 10.0.0.3 ping statistics --- 00:15:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.674 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:40.674 18:31:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:40.674 00:15:40.674 --- 10.0.0.1 ping statistics --- 00:15:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.674 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:40.674 18:31:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.674 18:31:48 -- nvmf/common.sh@421 -- # return 0 00:15:40.674 18:31:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:40.674 18:31:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.674 18:31:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:40.674 18:31:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:40.674 18:31:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.674 18:31:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:40.674 18:31:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:40.932 18:31:48 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:40.932 18:31:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:40.932 18:31:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:40.932 18:31:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.932 18:31:48 -- nvmf/common.sh@469 -- # nvmfpid=85817 00:15:40.932 18:31:48 -- nvmf/common.sh@470 -- # waitforlisten 85817 00:15:40.932 18:31:48 -- common/autotest_common.sh@819 -- # '[' -z 85817 ']' 00:15:40.932 18:31:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.932 18:31:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:40.932 18:31:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:40.932 18:31:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
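nvmfappstart above launches the target inside the namespace with a single-core mask (-m 0x2, versus -m 0xF for the multipath test) and then blocks in waitforlisten until the RPC socket answers. A simplified equivalent is below; the polling via rpc_get_methods is an assumption, since the actual waitforlisten body is not shown in this excerpt, only its max_retries=100 and /var/tmp/spdk.sock settings:

# Hedged sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do                                   # max_retries=100, as set above
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done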
00:15:40.932 18:31:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:40.932 18:31:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.932 [2024-07-14 18:31:48.170011] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:40.932 [2024-07-14 18:31:48.170086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.932 [2024-07-14 18:31:48.302437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.190 [2024-07-14 18:31:48.362468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:41.190 [2024-07-14 18:31:48.362647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.190 [2024-07-14 18:31:48.362660] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.190 [2024-07-14 18:31:48.362668] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.190 [2024-07-14 18:31:48.362698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.755 18:31:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:41.755 18:31:49 -- common/autotest_common.sh@852 -- # return 0 00:15:41.755 18:31:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:41.755 18:31:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:41.755 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.013 18:31:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.014 18:31:49 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:42.014 18:31:49 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:42.014 18:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.014 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 [2024-07-14 18:31:49.208695] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.014 18:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.014 18:31:49 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:42.014 18:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.014 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 18:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.014 18:31:49 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.014 18:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.014 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 [2024-07-14 18:31:49.224789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.014 18:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.014 18:31:49 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:42.014 18:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.014 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 18:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.014 18:31:49 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
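The rpc_cmd calls traced here provision the zero-copy target: a TCP transport created with --zcopy and -c 0 (no in-capsule data), subsystem cnode1 allowing up to 10 namespaces, listeners on 10.0.0.2:4420 plus discovery, and a 32 MiB malloc bdev with 4 KiB blocks; the namespace attach follows just below. The same sequence written as plain rpc.py invocations (a sketch, not the harness's rpc_cmd wrapper), with the final add_ns included for completeness:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport, zero-copy enabled
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB bdev, 4 KiB block size
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1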
00:15:42.014 18:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.014 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 malloc0 00:15:42.014 18:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.014 18:31:49 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:42.014 18:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.014 18:31:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 18:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.014 18:31:49 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:42.014 18:31:49 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:42.014 18:31:49 -- nvmf/common.sh@520 -- # config=() 00:15:42.014 18:31:49 -- nvmf/common.sh@520 -- # local subsystem config 00:15:42.014 18:31:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:42.014 18:31:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:42.014 { 00:15:42.014 "params": { 00:15:42.014 "name": "Nvme$subsystem", 00:15:42.014 "trtype": "$TEST_TRANSPORT", 00:15:42.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.014 "adrfam": "ipv4", 00:15:42.014 "trsvcid": "$NVMF_PORT", 00:15:42.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.014 "hdgst": ${hdgst:-false}, 00:15:42.014 "ddgst": ${ddgst:-false} 00:15:42.014 }, 00:15:42.014 "method": "bdev_nvme_attach_controller" 00:15:42.014 } 00:15:42.014 EOF 00:15:42.014 )") 00:15:42.014 18:31:49 -- nvmf/common.sh@542 -- # cat 00:15:42.014 18:31:49 -- nvmf/common.sh@544 -- # jq . 00:15:42.014 18:31:49 -- nvmf/common.sh@545 -- # IFS=, 00:15:42.014 18:31:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:42.014 "params": { 00:15:42.014 "name": "Nvme1", 00:15:42.014 "trtype": "tcp", 00:15:42.014 "traddr": "10.0.0.2", 00:15:42.014 "adrfam": "ipv4", 00:15:42.014 "trsvcid": "4420", 00:15:42.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.014 "hdgst": false, 00:15:42.014 "ddgst": false 00:15:42.014 }, 00:15:42.014 "method": "bdev_nvme_attach_controller" 00:15:42.014 }' 00:15:42.014 [2024-07-14 18:31:49.319272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:42.014 [2024-07-14 18:31:49.319357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85873 ] 00:15:42.272 [2024-07-14 18:31:49.462795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.272 [2024-07-14 18:31:49.527836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.530 Running I/O for 10 seconds... 
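The gen_nvmf_target_json fragments printed above assemble into the bdev configuration that bdevperf reads from /dev/fd/62. Reassembled as a file below; the outer "subsystems"/"config" wrapper is assumed from SPDK's usual JSON config layout (it is not shown verbatim in this log), and /tmp/bdevperf_nvme.json is a hypothetical path used only for illustration:

# Sketch: feed bdevperf the same attach-controller config via a temporary file.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192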
00:15:52.512
00:15:52.512 Latency(us)
00:15:52.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:52.512 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:52.512 Verification LBA range: start 0x0 length 0x1000
00:15:52.512 Nvme1n1 : 10.01 10589.67 82.73 0.00 0.00 12056.93 1586.27 16086.11
00:15:52.512 ===================================================================================================================
00:15:52.512 Total : 10589.67 82.73 0.00 0.00 12056.93 1586.27 16086.11
00:15:52.512 18:31:59 -- target/zcopy.sh@39 -- # perfpid=85984
00:15:52.512 18:31:59 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:52.512 18:31:59 -- common/autotest_common.sh@10 -- # set +x
00:15:52.512 18:31:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:52.512 18:31:59 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:52.512 18:31:59 -- nvmf/common.sh@520 -- # config=()
00:15:52.512 18:31:59 -- nvmf/common.sh@520 -- # local subsystem config
00:15:52.512 18:31:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:52.512 18:31:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:52.512 {
00:15:52.512 "params": {
00:15:52.512 "name": "Nvme$subsystem",
00:15:52.512 "trtype": "$TEST_TRANSPORT",
00:15:52.512 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:52.512 "adrfam": "ipv4",
00:15:52.512 "trsvcid": "$NVMF_PORT",
00:15:52.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:52.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:52.512 "hdgst": ${hdgst:-false},
00:15:52.512 "ddgst": ${ddgst:-false}
00:15:52.512 },
00:15:52.512 "method": "bdev_nvme_attach_controller"
00:15:52.512 }
00:15:52.512 EOF
00:15:52.512 )")
00:15:52.512 18:31:59 -- nvmf/common.sh@542 -- # cat
00:15:52.512 [2024-07-14 18:31:59.914295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:52.512 [2024-07-14 18:31:59.914337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:52.512 18:31:59 -- nvmf/common.sh@544 -- # jq .
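
As a quick sanity check on the verify-run table above: the MiB/s column matches the IOPS column at this run's 8192-byte I/O size (10589.67 IOPS x 8192 B is roughly 86.8 MB/s, i.e. about 82.7 MiB/s), and the latency columns are in microseconds, so the average is about 12.06 ms with a 1.59 ms minimum and a 16.09 ms maximum.
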
00:15:52.512 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.512 18:31:59 -- nvmf/common.sh@545 -- # IFS=, 00:15:52.512 18:31:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:52.512 "params": { 00:15:52.512 "name": "Nvme1", 00:15:52.512 "trtype": "tcp", 00:15:52.512 "traddr": "10.0.0.2", 00:15:52.512 "adrfam": "ipv4", 00:15:52.512 "trsvcid": "4420", 00:15:52.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.512 "hdgst": false, 00:15:52.512 "ddgst": false 00:15:52.512 }, 00:15:52.512 "method": "bdev_nvme_attach_controller" 00:15:52.512 }' 00:15:52.512 [2024-07-14 18:31:59.926260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.512 [2024-07-14 18:31:59.926290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.512 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:31:59.938289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:31:59.938317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:31:59.950269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:31:59.950295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:31:59.962271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:31:59.962296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 [2024-07-14 18:31:59.962685] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:52.772 [2024-07-14 18:31:59.962759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85984 ] 00:15:52.772 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:31:59.974274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:31:59.974298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:31:59.986276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:31:59.986300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:31:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:31:59.998279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:31:59.998303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.010284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.010310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.022285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.022310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.034288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.034312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.046293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.046318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.058309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.058336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.070299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.070323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.082313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.082337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.094297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.094325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.772 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.772 [2024-07-14 18:32:00.102683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.772 [2024-07-14 18:32:00.106298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.772 [2024-07-14 18:32:00.106323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.118310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.118338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.130315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.130342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.142328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.142359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.154322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.154348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.165431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.773 [2024-07-14 18:32:00.166362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.166388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.178349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.178375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.773 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.773 [2024-07-14 18:32:00.190384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.773 [2024-07-14 18:32:00.190415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.202419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.202457] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.214386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.214415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.226378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.226406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.238368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.238393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.250386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.250412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.262405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.262434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.274401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.274429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.286395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 
18:32:00.286421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.298424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.298469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.310410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.310437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.322488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.032 [2024-07-14 18:32:00.322544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.032 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.032 [2024-07-14 18:32:00.334414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.334440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 Running I/O for 5 seconds... 
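
The 5-second run that just started is the second bdevperf pass from the trace above: 50/50 random read/write at queue depth 128 with 8192-byte I/O, driven by the same bdev_nvme_attach_controller config delivered over a process-substitution fd (--json /dev/fd/63). A rough stand-alone equivalent is sketched below; the flags and the params block are taken from the trace, while the surrounding "subsystems"/"bdev" wrapper is the usual SPDK JSON-config shape and is assumed rather than quoted from this log:

# Sketch only: second bdevperf pass with an inline JSON config (wrapper assumed)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
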
00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.352867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.352916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.367372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.367405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.384245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.384291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.399952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.399988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.411434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.411466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.426122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.426155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.441194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.441225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:53.033 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.033 [2024-07-14 18:32:00.453400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.033 [2024-07-14 18:32:00.453432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.292 [2024-07-14 18:32:00.469059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.292 [2024-07-14 18:32:00.469091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.292 [2024-07-14 18:32:00.486006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.292 [2024-07-14 18:32:00.486037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.292 [2024-07-14 18:32:00.502721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.292 [2024-07-14 18:32:00.502769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.292 [2024-07-14 18:32:00.518545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.292 [2024-07-14 18:32:00.518575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.292 [2024-07-14 18:32:00.535528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.292 [2024-07-14 18:32:00.535556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.292 [2024-07-14 18:32:00.551903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.292 [2024-07-14 18:32:00.551935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:53.292 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.568496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.568555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.580232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.580263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.595703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.595736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.612271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.612304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.629638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.629668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.646045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.646076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.662977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.663008] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.679245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.679277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.696348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.696379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.293 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.293 [2024-07-14 18:32:00.711378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.293 [2024-07-14 18:32:00.711410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.728335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.728366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.745627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.745659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.761132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.761162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.777971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 
18:32:00.778003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.794572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.794604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.811738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.811775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.828445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.828478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.844755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.844786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.861699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.861747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.878667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.878698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.895880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:53.553 [2024-07-14 18:32:00.895916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.911250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.911282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.920529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.920590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.933943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.933973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.949441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.949472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.553 [2024-07-14 18:32:00.966318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.553 [2024-07-14 18:32:00.966349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.553 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:00.981610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:00.981641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:00.997821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:53.813 [2024-07-14 18:32:00.997854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.013172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.013203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.029096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.029127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.046090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.046121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.062545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.062576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.079606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.079637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.096983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.097015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.113325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.113356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.130307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.130339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.145596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.145628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.161041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.161072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.177863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.177909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.194779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.194811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.210328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.210411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.220695] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.220759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.813 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.813 [2024-07-14 18:32:01.231562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.813 [2024-07-14 18:32:01.231598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.072 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.072 [2024-07-14 18:32:01.248961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.072 [2024-07-14 18:32:01.248992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.072 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.072 [2024-07-14 18:32:01.265943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.072 [2024-07-14 18:32:01.265974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.072 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.072 [2024-07-14 18:32:01.282576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.072 [2024-07-14 18:32:01.282612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.072 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.072 [2024-07-14 18:32:01.298682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.072 [2024-07-14 18:32:01.298715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.072 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.072 [2024-07-14 18:32:01.315716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.072 [2024-07-14 18:32:01.315750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.072 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 
18:32:01.331927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.331960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.348196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.348226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.365335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.365366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.381150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.381183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.397775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.397812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.414761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.414797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.431268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.431301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
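
Each repeated pair of *ERROR* lines plus the accompanying "error on JSON-RPC call" message above is one more nvmf_subsystem_add_ns attempt made while NSID 1 is still in use, and each attempt is rejected with JSON-RPC error -32602 (Invalid parameters). Reconstructed from the logged params map, the exchange behind every one of these failures looks roughly as follows; the JSON-RPC envelope fields (jsonrpc, id) are standard rather than quoted from the log, and the rpc.py path is assumed:

# Sketch only: the call behind each repeated failure above
# Request:  {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
#            "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
#                       "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
# Response: {"jsonrpc": "2.0", "id": 1,
#            "error": {"code": -32602, "message": "Invalid parameters"}}
# Command-line equivalent:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
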
00:15:54.073 [2024-07-14 18:32:01.447746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.447781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.465114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.465146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.073 [2024-07-14 18:32:01.480778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.073 [2024-07-14 18:32:01.480840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.073 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.499370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.499402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.513251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.513284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.529468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.529527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.544640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.544672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.560893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.560924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.577528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.577557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.594000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.594030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.611105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.611135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.628375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.628408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.642992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.643023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.658065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.658096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.670393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.670424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.684795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.684837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.700088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.700135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.710975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.711007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.726791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.726840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.332 [2024-07-14 18:32:01.743983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.332 [2024-07-14 18:32:01.744048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.332 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.759942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.759978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.776618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.776650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.792738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.792770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.810384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.810416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.825669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.825700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.836663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.836696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.852743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.852774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.863592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.863620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.878639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.878669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.896227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.896259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.911035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.911067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.925439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.925471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.942455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.942516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.957689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.957720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.969054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.969086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:01.985310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:01.985359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.590 [2024-07-14 18:32:02.001033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.590 [2024-07-14 18:32:02.001063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.590 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.017516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.017591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.034129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.034161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.049576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.049606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.061649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.061679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.075964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.076016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.091974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.092039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.108930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.108960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.125330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.125362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.141094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.141125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.158123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.158155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.173941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.173972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.190916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.190948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.207235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.207267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.224668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.224715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.238843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.238876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.848 [2024-07-14 18:32:02.254851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.848 [2024-07-14 18:32:02.254890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.848 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.106 [2024-07-14 18:32:02.272677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.272708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.286115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.286148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.301679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.301710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.317662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.317692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.334377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.334408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.351564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.351594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.368065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.368098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.384350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.384382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.401775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.401810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.417968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.417999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 
18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.429090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.429119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.445314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.445345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.461272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.461304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.478309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.478342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.494513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.494557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.511404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.511436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.107 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.107 [2024-07-14 18:32:02.528220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.107 [2024-07-14 18:32:02.528252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.543327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.543358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.558496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.558556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.576984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.577017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.591982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.592061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.608464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.608496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.619956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.620004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.636812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.636844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.652985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.653018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.669392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.669426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.685990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.686024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.703147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.703179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.719144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.719175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.736554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.736591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.751800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.751835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.769155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.769187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.366 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.366 [2024-07-14 18:32:02.785277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.366 [2024-07-14 18:32:02.785309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.801574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.801606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.818732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.818764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.833184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.833215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.848012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.848062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.857149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.857179] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.873162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.873194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.890133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.890164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.625 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.625 [2024-07-14 18:32:02.907364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.625 [2024-07-14 18:32:02.907394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:02.924403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:02.924434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:02.935561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:02.935593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:02.951580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:02.951610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:02.967892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 
18:32:02.967941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:02.983661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:02.983731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:03.001204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:03.001238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:03.016137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:03.016168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:03.027046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:03.027078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.626 [2024-07-14 18:32:03.043228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.626 [2024-07-14 18:32:03.043267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.626 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.059100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.059131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.077156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:55.885 [2024-07-14 18:32:03.077188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.093927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.093992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.109674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.109721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.125251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.125282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.136763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.136795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.152664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.152694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.163187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.163218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.178436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:55.885 [2024-07-14 18:32:03.178468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.188813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.188848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.202221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.202252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.218285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.218316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.235432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.235463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.252349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.252399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.267512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.267576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.284104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.284151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.885 [2024-07-14 18:32:03.299942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.885 [2024-07-14 18:32:03.299978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.885 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.316574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.316604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.334021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.334053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.349768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.349800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.360629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.360660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.374183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.374213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.389542] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.389587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.406139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.406175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.423388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.423420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.438390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.438421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.454308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.454339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.471569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.471600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.487909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.487945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 
18:32:03.504870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.504903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.521978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.522009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.537338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.537371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.145 [2024-07-14 18:32:03.552886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.145 [2024-07-14 18:32:03.552917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.145 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.571030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.571077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.585534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.585596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.596189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.596221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:56.404 [2024-07-14 18:32:03.612196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.612229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.628675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.628706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.647186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.647218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.661946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.661977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.677187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.677218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.693527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.693557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.404 [2024-07-14 18:32:03.710359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.404 [2024-07-14 18:32:03.710391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.404 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.727588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.727635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.743784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.743820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.760391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.760418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.776463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.776521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.793605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.793637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.808177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.808211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.405 [2024-07-14 18:32:03.816748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.405 [2024-07-14 18:32:03.816780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.405 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.830638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.830683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.845024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.845057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.860869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.860902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.877480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.877539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.893602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.893635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.909607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.909641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.927323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.927354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.942674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.942705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.957848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.957881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.974273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.974304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:03.991204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:03.991238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:04.008105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:04.008140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:04.024332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:04.024363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:04.041553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:04.041584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:04.057540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:04.057571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.664 [2024-07-14 18:32:04.074679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.664 [2024-07-14 18:32:04.074710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.664 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.091310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.091341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.108809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.108841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.124209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.124242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.139298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.139330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.150101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.150133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.166005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.166038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.182930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.182962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.200458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.200519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.216270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.216332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.232347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.232380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.249427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.249458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.926 [2024-07-14 18:32:04.267066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.926 [2024-07-14 18:32:04.267099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.926 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.927 [2024-07-14 18:32:04.282681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.927 [2024-07-14 18:32:04.282776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.927 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.927 [2024-07-14 18:32:04.300303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.927 [2024-07-14 18:32:04.300350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.927 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.927 [2024-07-14 18:32:04.315700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.927 [2024-07-14 18:32:04.315734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.927 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.927 [2024-07-14 18:32:04.329926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.927 [2024-07-14 18:32:04.329956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.927 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.927 [2024-07-14 18:32:04.346382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.927 [2024-07-14 18:32:04.346414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.361938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.361968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.373756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.373788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.389341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.389373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.405081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.405113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.419738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.419773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.430451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.430483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.446735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.446770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.461696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.461732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.470431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.470463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.485425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.485456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.501415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.501446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.518208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.186 [2024-07-14 18:32:04.518239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.186 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.186 [2024-07-14 18:32:04.533724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.187 [2024-07-14 18:32:04.533756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.187 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.187 [2024-07-14 18:32:04.549089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.187 [2024-07-14 18:32:04.549120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.187 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.187 [2024-07-14 18:32:04.566053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.187 [2024-07-14 18:32:04.566083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.187 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.187 [2024-07-14 18:32:04.583137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.187 [2024-07-14 18:32:04.583169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.187 2024/07/14 
18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.187 [2024-07-14 18:32:04.599338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.187 [2024-07-14 18:32:04.599369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.187 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.614872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.614919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.632394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.632427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.649212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.649243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.665320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.665352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.681898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.681930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.698371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.698405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.714386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.714418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.725932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.725963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.742862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.742894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.757325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.757355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.773000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.773030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.789157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.789190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.806512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.446 [2024-07-14 18:32:04.806543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:57.446 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.446 [2024-07-14 18:32:04.822563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.447 [2024-07-14 18:32:04.822593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.447 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.447 [2024-07-14 18:32:04.838269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.447 [2024-07-14 18:32:04.838300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.447 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.447 [2024-07-14 18:32:04.850216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.447 [2024-07-14 18:32:04.850247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.447 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.447 [2024-07-14 18:32:04.865563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.447 [2024-07-14 18:32:04.865595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.883126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.883158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.898577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.898606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.909732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.909764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.926055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.926087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.943115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.943147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.959704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.959739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.975875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.975910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:04.994129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:04.994165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.706 2024/07/14 18:32:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.706 [2024-07-14 18:32:05.009122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.706 [2024-07-14 18:32:05.009155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.024320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.024350] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.042074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.042107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.056832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.056879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.072110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.072141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.083326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.083358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.100389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.100421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.707 [2024-07-14 18:32:05.115036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.707 [2024-07-14 18:32:05.115067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.707 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.966 [2024-07-14 18:32:05.130495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.966 [2024-07-14 
18:32:05.130536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.147241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.147272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.164628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.164658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.181209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.181240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.198056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.198087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.213471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.213547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.223618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.223654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.240285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:57.967 [2024-07-14 18:32:05.240320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.254904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.254936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.271132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.271164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.286896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.286929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.305793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.305833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.323097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.323132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.338463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.338522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 00:15:57.967 Latency(us) 00:15:57.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:15:57.967 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:57.967 Nvme1n1 : 5.01 13004.23 101.60 0.00 0.00 9831.99 3991.74 22163.08 00:15:57.967 =================================================================================================================== 00:15:57.967 Total : 13004.23 101.60 0.00 0.00 9831.99 3991.74 22163.08 00:15:57.967 [2024-07-14 18:32:05.350310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.350354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.362318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.362362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.374326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.374374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.967 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.967 [2024-07-14 18:32:05.386383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.967 [2024-07-14 18:32:05.386464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.398375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.398427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.410352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.410409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.422349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.422407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.434348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.434396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.446352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.446411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.458358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.458413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.470364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.470410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.482377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.482421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.494356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.494398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.506374] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.506425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.518361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.518403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.530376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.530429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.542398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.542448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 [2024-07-14 18:32:05.554362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.226 [2024-07-14 18:32:05.554402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.226 2024/07/14 18:32:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.226 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85984) - No such process 00:15:58.226 18:32:05 -- target/zcopy.sh@49 -- # wait 85984 00:15:58.226 18:32:05 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.226 18:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.226 18:32:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.226 18:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.226 18:32:05 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:58.227 18:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.227 18:32:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.227 delay0 00:15:58.227 18:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.227 18:32:05 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:58.227 18:32:05 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:15:58.227 18:32:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.227 18:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.227 18:32:05 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:58.502 [2024-07-14 18:32:05.747666] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:05.068 Initializing NVMe Controllers 00:16:05.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:05.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:05.068 Initialization complete. Launching workers. 00:16:05.068 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 97 00:16:05.068 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 384, failed to submit 33 00:16:05.068 success 219, unsuccess 165, failed 0 00:16:05.068 18:32:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:05.068 18:32:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:05.068 18:32:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:05.068 18:32:11 -- nvmf/common.sh@116 -- # sync 00:16:05.068 18:32:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:05.068 18:32:11 -- nvmf/common.sh@119 -- # set +e 00:16:05.068 18:32:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:05.068 18:32:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:05.068 rmmod nvme_tcp 00:16:05.068 rmmod nvme_fabrics 00:16:05.068 rmmod nvme_keyring 00:16:05.068 18:32:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:05.068 18:32:11 -- nvmf/common.sh@123 -- # set -e 00:16:05.068 18:32:11 -- nvmf/common.sh@124 -- # return 0 00:16:05.068 18:32:11 -- nvmf/common.sh@477 -- # '[' -n 85817 ']' 00:16:05.068 18:32:11 -- nvmf/common.sh@478 -- # killprocess 85817 00:16:05.068 18:32:11 -- common/autotest_common.sh@926 -- # '[' -z 85817 ']' 00:16:05.068 18:32:11 -- common/autotest_common.sh@930 -- # kill -0 85817 00:16:05.068 18:32:11 -- common/autotest_common.sh@931 -- # uname 00:16:05.068 18:32:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.068 18:32:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85817 00:16:05.068 killing process with pid 85817 00:16:05.068 18:32:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:05.068 18:32:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:05.068 18:32:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85817' 00:16:05.068 18:32:11 -- common/autotest_common.sh@945 -- # kill 85817 00:16:05.068 18:32:11 -- common/autotest_common.sh@950 -- # wait 85817 00:16:05.068 18:32:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:05.068 18:32:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:05.068 18:32:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:05.068 18:32:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.068 18:32:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:05.068 18:32:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.068 18:32:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.068 18:32:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.068 18:32:12 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:16:05.068 00:16:05.068 real 0m24.554s 00:16:05.068 user 0m39.563s 00:16:05.068 sys 0m6.557s 00:16:05.068 18:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.068 ************************************ 00:16:05.068 18:32:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.068 END TEST nvmf_zcopy 00:16:05.068 ************************************ 00:16:05.068 18:32:12 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:05.068 18:32:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:05.068 18:32:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.068 18:32:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.068 ************************************ 00:16:05.068 START TEST nvmf_nmic 00:16:05.068 ************************************ 00:16:05.068 18:32:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:05.068 * Looking for test storage... 00:16:05.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:05.068 18:32:12 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.068 18:32:12 -- nvmf/common.sh@7 -- # uname -s 00:16:05.068 18:32:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.068 18:32:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.068 18:32:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.068 18:32:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.068 18:32:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.068 18:32:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.068 18:32:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.068 18:32:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.068 18:32:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.068 18:32:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.068 18:32:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:16:05.068 18:32:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:16:05.068 18:32:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.068 18:32:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.068 18:32:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.068 18:32:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.068 18:32:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.068 18:32:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.069 18:32:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.069 18:32:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.069 18:32:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.069 18:32:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.069 18:32:12 -- paths/export.sh@5 -- # export PATH 00:16:05.069 18:32:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.069 18:32:12 -- nvmf/common.sh@46 -- # : 0 00:16:05.069 18:32:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:05.069 18:32:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:05.069 18:32:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:05.069 18:32:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.069 18:32:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.069 18:32:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:05.069 18:32:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:05.069 18:32:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:05.069 18:32:12 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.069 18:32:12 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.069 18:32:12 -- target/nmic.sh@14 -- # nvmftestinit 00:16:05.069 18:32:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:05.069 18:32:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.069 18:32:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:05.069 18:32:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:05.069 18:32:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:05.069 18:32:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.069 18:32:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.069 18:32:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.069 18:32:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:05.069 18:32:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:05.069 18:32:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:05.069 18:32:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:05.069 18:32:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:05.069 18:32:12 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:16:05.069 18:32:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.069 18:32:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.069 18:32:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:05.069 18:32:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:05.069 18:32:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.069 18:32:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.069 18:32:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.069 18:32:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.069 18:32:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.069 18:32:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.069 18:32:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.069 18:32:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.069 18:32:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:05.069 18:32:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:05.069 Cannot find device "nvmf_tgt_br" 00:16:05.069 18:32:12 -- nvmf/common.sh@154 -- # true 00:16:05.069 18:32:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.069 Cannot find device "nvmf_tgt_br2" 00:16:05.069 18:32:12 -- nvmf/common.sh@155 -- # true 00:16:05.069 18:32:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:05.069 18:32:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:05.069 Cannot find device "nvmf_tgt_br" 00:16:05.069 18:32:12 -- nvmf/common.sh@157 -- # true 00:16:05.069 18:32:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:05.069 Cannot find device "nvmf_tgt_br2" 00:16:05.069 18:32:12 -- nvmf/common.sh@158 -- # true 00:16:05.069 18:32:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:05.069 18:32:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:05.069 18:32:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.329 18:32:12 -- nvmf/common.sh@161 -- # true 00:16:05.329 18:32:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.329 18:32:12 -- nvmf/common.sh@162 -- # true 00:16:05.329 18:32:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.329 18:32:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.329 18:32:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.329 18:32:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.329 18:32:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.329 18:32:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.329 18:32:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.329 18:32:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:05.329 18:32:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:05.329 18:32:12 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:16:05.329 18:32:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:05.329 18:32:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:05.329 18:32:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:05.329 18:32:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.329 18:32:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.329 18:32:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.329 18:32:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:05.329 18:32:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:05.329 18:32:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.329 18:32:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.329 18:32:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.329 18:32:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.329 18:32:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.329 18:32:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:05.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:16:05.329 00:16:05.329 --- 10.0.0.2 ping statistics --- 00:16:05.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.329 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:16:05.329 18:32:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:05.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:05.329 00:16:05.329 --- 10.0.0.3 ping statistics --- 00:16:05.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.329 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:05.329 18:32:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:05.329 00:16:05.329 --- 10.0.0.1 ping statistics --- 00:16:05.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.329 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:05.329 18:32:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.329 18:32:12 -- nvmf/common.sh@421 -- # return 0 00:16:05.329 18:32:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:05.329 18:32:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.329 18:32:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:05.329 18:32:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:05.329 18:32:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.329 18:32:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:05.329 18:32:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:05.329 18:32:12 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:05.329 18:32:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:05.329 18:32:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:05.329 18:32:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.329 18:32:12 -- nvmf/common.sh@469 -- # nvmfpid=86303 00:16:05.329 18:32:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:05.329 18:32:12 -- nvmf/common.sh@470 -- # waitforlisten 86303 00:16:05.329 18:32:12 -- common/autotest_common.sh@819 -- # '[' -z 86303 ']' 00:16:05.329 18:32:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.329 18:32:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.329 18:32:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.329 18:32:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.329 18:32:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.329 [2024-07-14 18:32:12.742464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:05.329 [2024-07-14 18:32:12.742585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.588 [2024-07-14 18:32:12.881632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.588 [2024-07-14 18:32:12.981331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:05.588 [2024-07-14 18:32:12.981643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.588 [2024-07-14 18:32:12.981755] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.588 [2024-07-14 18:32:12.981887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
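The addresses pinged above come from the veth topology that nvmf_veth_init traced just before them: 10.0.0.1 stays on nvmf_init_if on the host side, while nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A condensed sketch of that setup, with the second target interface and a few link-up steps omitted (not the full helper):

  # Condensed from the nvmf_veth_init trace above; sketch only.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host-side reachability check into the namespace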
00:16:05.588 [2024-07-14 18:32:12.982133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.588 [2024-07-14 18:32:12.982256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.588 [2024-07-14 18:32:12.982779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.588 [2024-07-14 18:32:12.982786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.522 18:32:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.522 18:32:13 -- common/autotest_common.sh@852 -- # return 0 00:16:06.522 18:32:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:06.522 18:32:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:06.522 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.522 18:32:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.522 18:32:13 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.522 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.522 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.522 [2024-07-14 18:32:13.781298] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 Malloc0 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 [2024-07-14 18:32:13.851632] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:06.523 test case1: single bdev can't be used in multiple subsystems 00:16:06.523 18:32:13 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 
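The rpc_cmd calls traced above are the standard bring-up for the nmic test target: create the TCP transport, back a namespace with a 64 MB malloc bdev (512-byte blocks), and expose it behind cnode1 on 10.0.0.2:4420 before the second subsystem cnode2 is created for the negative test. Roughly the same sequence written as direct rpc.py calls, assuming the default RPC socket (a sketch, not the test's exact wrapper):

  # Sketch of the bring-up sequence shown in the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420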
00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@28 -- # nmic_status=0 00:16:06.523 18:32:13 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 [2024-07-14 18:32:13.875489] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:06.523 [2024-07-14 18:32:13.875553] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:06.523 [2024-07-14 18:32:13.875565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.523 2024/07/14 18:32:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.523 request: 00:16:06.523 { 00:16:06.523 "method": "nvmf_subsystem_add_ns", 00:16:06.523 "params": { 00:16:06.523 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:06.523 "namespace": { 00:16:06.523 "bdev_name": "Malloc0" 00:16:06.523 } 00:16:06.523 } 00:16:06.523 } 00:16:06.523 Got JSON-RPC error response 00:16:06.523 GoRPCClient: error on JSON-RPC call 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@29 -- # nmic_status=1 00:16:06.523 18:32:13 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:06.523 18:32:13 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:06.523 Adding namespace failed - expected result. 
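Test case 1 above exercised the expected failure: Malloc0 is already claimed (exclusive_write) by cnode1, so attaching it to cnode2 is rejected and the test treats the -32602 response as a pass. A sketch of that rejected call, again assuming the default RPC socket:

  # Sketch: a second subsystem cannot claim a bdev that cnode1 already owns.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode2 Malloc0
  # -> Code=-32602 Msg=Invalid parameters
  #    ("bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target")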
00:16:06.523 test case2: host connect to nvmf target in multiple paths 00:16:06.523 18:32:13 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:06.523 18:32:13 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:06.523 18:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.523 18:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.523 [2024-07-14 18:32:13.887735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:06.523 18:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.523 18:32:13 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.781 18:32:14 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:07.039 18:32:14 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:07.039 18:32:14 -- common/autotest_common.sh@1177 -- # local i=0 00:16:07.039 18:32:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.039 18:32:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:07.039 18:32:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:08.965 18:32:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:08.965 18:32:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:08.965 18:32:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.965 18:32:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:08.965 18:32:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.965 18:32:16 -- common/autotest_common.sh@1187 -- # return 0 00:16:08.965 18:32:16 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:08.965 [global] 00:16:08.965 thread=1 00:16:08.965 invalidate=1 00:16:08.965 rw=write 00:16:08.965 time_based=1 00:16:08.965 runtime=1 00:16:08.965 ioengine=libaio 00:16:08.965 direct=1 00:16:08.965 bs=4096 00:16:08.965 iodepth=1 00:16:08.965 norandommap=0 00:16:08.965 numjobs=1 00:16:08.965 00:16:08.965 verify_dump=1 00:16:08.965 verify_backlog=512 00:16:08.965 verify_state_save=0 00:16:08.965 do_verify=1 00:16:08.965 verify=crc32c-intel 00:16:08.965 [job0] 00:16:08.965 filename=/dev/nvme0n1 00:16:08.965 Could not set queue depth (nvme0n1) 00:16:09.223 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.223 fio-3.35 00:16:09.223 Starting 1 thread 00:16:10.158 00:16:10.158 job0: (groupid=0, jobs=1): err= 0: pid=86411: Sun Jul 14 18:32:17 2024 00:16:10.158 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:10.158 slat (nsec): min=12650, max=67975, avg=17303.98, stdev=6538.41 00:16:10.158 clat (usec): min=116, max=308, avg=153.53, stdev=22.95 00:16:10.158 lat (usec): min=129, max=338, avg=170.84, stdev=25.30 00:16:10.158 clat percentiles (usec): 00:16:10.158 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 135], 00:16:10.158 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:16:10.158 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 186], 
95.00th=[ 196], 00:16:10.158 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 269], 99.95th=[ 297], 00:16:10.158 | 99.99th=[ 310] 00:16:10.158 write: IOPS=3362, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec); 0 zone resets 00:16:10.158 slat (usec): min=18, max=150, avg=25.76, stdev= 9.48 00:16:10.158 clat (usec): min=83, max=851, avg=111.43, stdev=25.36 00:16:10.158 lat (usec): min=102, max=884, avg=137.19, stdev=28.84 00:16:10.158 clat percentiles (usec): 00:16:10.158 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 96], 00:16:10.158 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 106], 60.00th=[ 111], 00:16:10.158 | 70.00th=[ 117], 80.00th=[ 126], 90.00th=[ 139], 95.00th=[ 147], 00:16:10.158 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 359], 99.95th=[ 553], 00:16:10.158 | 99.99th=[ 848] 00:16:10.158 bw ( KiB/s): min=13264, max=13264, per=98.61%, avg=13264.00, stdev= 0.00, samples=1 00:16:10.158 iops : min= 3316, max= 3316, avg=3316.00, stdev= 0.00, samples=1 00:16:10.158 lat (usec) : 100=18.16%, 250=81.62%, 500=0.19%, 750=0.02%, 1000=0.02% 00:16:10.158 cpu : usr=3.00%, sys=10.00%, ctx=6438, majf=0, minf=2 00:16:10.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.158 issued rwts: total=3072,3366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.158 00:16:10.158 Run status group 0 (all jobs): 00:16:10.158 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:10.158 WRITE: bw=13.1MiB/s (13.8MB/s), 13.1MiB/s-13.1MiB/s (13.8MB/s-13.8MB/s), io=13.1MiB (13.8MB), run=1001-1001msec 00:16:10.158 00:16:10.158 Disk stats (read/write): 00:16:10.158 nvme0n1: ios=2838/3072, merge=0/0, ticks=468/386, in_queue=854, util=91.38% 00:16:10.158 18:32:17 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:10.416 18:32:17 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.416 18:32:17 -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.416 18:32:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:10.416 18:32:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.416 18:32:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:10.416 18:32:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.416 18:32:17 -- common/autotest_common.sh@1210 -- # return 0 00:16:10.416 18:32:17 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:10.416 18:32:17 -- target/nmic.sh@53 -- # nvmftestfini 00:16:10.416 18:32:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:10.416 18:32:17 -- nvmf/common.sh@116 -- # sync 00:16:10.416 18:32:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:10.416 18:32:17 -- nvmf/common.sh@119 -- # set +e 00:16:10.416 18:32:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:10.416 18:32:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:10.416 rmmod nvme_tcp 00:16:10.416 rmmod nvme_fabrics 00:16:10.416 rmmod nvme_keyring 00:16:10.416 18:32:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:10.416 18:32:17 -- nvmf/common.sh@123 -- # set -e 00:16:10.416 18:32:17 -- nvmf/common.sh@124 -- # return 0 00:16:10.416 18:32:17 -- 
nvmf/common.sh@477 -- # '[' -n 86303 ']' 00:16:10.416 18:32:17 -- nvmf/common.sh@478 -- # killprocess 86303 00:16:10.416 18:32:17 -- common/autotest_common.sh@926 -- # '[' -z 86303 ']' 00:16:10.416 18:32:17 -- common/autotest_common.sh@930 -- # kill -0 86303 00:16:10.416 18:32:17 -- common/autotest_common.sh@931 -- # uname 00:16:10.674 18:32:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:10.674 18:32:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86303 00:16:10.674 killing process with pid 86303 00:16:10.674 18:32:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:10.674 18:32:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:10.674 18:32:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86303' 00:16:10.674 18:32:17 -- common/autotest_common.sh@945 -- # kill 86303 00:16:10.674 18:32:17 -- common/autotest_common.sh@950 -- # wait 86303 00:16:10.934 18:32:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:10.934 18:32:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:10.934 18:32:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:10.934 18:32:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.934 18:32:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.934 18:32:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.934 18:32:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:10.934 00:16:10.934 real 0m5.902s 00:16:10.934 user 0m20.158s 00:16:10.934 sys 0m1.225s 00:16:10.934 18:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.934 18:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:10.934 ************************************ 00:16:10.934 END TEST nvmf_nmic 00:16:10.934 ************************************ 00:16:10.934 18:32:18 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:10.934 18:32:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:10.934 18:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.934 18:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:10.934 ************************************ 00:16:10.934 START TEST nvmf_fio_target 00:16:10.934 ************************************ 00:16:10.934 18:32:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:10.934 * Looking for test storage... 
00:16:10.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:10.934 18:32:18 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.934 18:32:18 -- nvmf/common.sh@7 -- # uname -s 00:16:10.934 18:32:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.934 18:32:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.934 18:32:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.934 18:32:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.934 18:32:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.934 18:32:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.934 18:32:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.934 18:32:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.934 18:32:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.934 18:32:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:16:10.934 18:32:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:16:10.934 18:32:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.934 18:32:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.934 18:32:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.934 18:32:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.934 18:32:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.934 18:32:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.934 18:32:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.934 18:32:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.934 18:32:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.934 18:32:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.934 18:32:18 -- paths/export.sh@5 
-- # export PATH 00:16:10.934 18:32:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.934 18:32:18 -- nvmf/common.sh@46 -- # : 0 00:16:10.934 18:32:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:10.934 18:32:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:10.934 18:32:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:10.934 18:32:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.934 18:32:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.934 18:32:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:10.934 18:32:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:10.934 18:32:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:10.934 18:32:18 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.934 18:32:18 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.934 18:32:18 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.934 18:32:18 -- target/fio.sh@16 -- # nvmftestinit 00:16:10.934 18:32:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:10.934 18:32:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.934 18:32:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:10.934 18:32:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:10.934 18:32:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:10.934 18:32:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.934 18:32:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.934 18:32:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.934 18:32:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:10.934 18:32:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:10.934 18:32:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.934 18:32:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.934 18:32:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.934 18:32:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:10.934 18:32:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.934 18:32:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.934 18:32:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.934 18:32:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.934 18:32:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.934 18:32:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.934 18:32:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.934 18:32:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.934 18:32:18 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:10.934 18:32:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:10.934 Cannot find device "nvmf_tgt_br" 00:16:10.934 18:32:18 -- nvmf/common.sh@154 -- # true 00:16:10.934 18:32:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:10.934 Cannot find device "nvmf_tgt_br2" 00:16:10.934 18:32:18 -- nvmf/common.sh@155 -- # true 00:16:10.934 18:32:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:10.934 18:32:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:10.934 Cannot find device "nvmf_tgt_br" 00:16:10.934 18:32:18 -- nvmf/common.sh@157 -- # true 00:16:10.934 18:32:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:11.193 Cannot find device "nvmf_tgt_br2" 00:16:11.193 18:32:18 -- nvmf/common.sh@158 -- # true 00:16:11.193 18:32:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:11.193 18:32:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:11.193 18:32:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.193 18:32:18 -- nvmf/common.sh@161 -- # true 00:16:11.193 18:32:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.193 18:32:18 -- nvmf/common.sh@162 -- # true 00:16:11.193 18:32:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.193 18:32:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.193 18:32:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.193 18:32:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.193 18:32:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.193 18:32:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.193 18:32:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.193 18:32:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.193 18:32:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.193 18:32:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:11.193 18:32:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:11.193 18:32:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:11.193 18:32:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:11.193 18:32:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.193 18:32:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.193 18:32:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.193 18:32:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:11.193 18:32:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.193 18:32:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.193 18:32:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.193 18:32:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.193 18:32:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.193 18:32:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.452 18:32:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:11.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:11.452 00:16:11.452 --- 10.0.0.2 ping statistics --- 00:16:11.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.452 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:11.452 18:32:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:11.452 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.452 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:11.452 00:16:11.452 --- 10.0.0.3 ping statistics --- 00:16:11.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.452 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:11.452 18:32:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:16:11.452 00:16:11.452 --- 10.0.0.1 ping statistics --- 00:16:11.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.452 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:11.452 18:32:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.452 18:32:18 -- nvmf/common.sh@421 -- # return 0 00:16:11.452 18:32:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.452 18:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.452 18:32:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.452 18:32:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.452 18:32:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.452 18:32:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.452 18:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.452 18:32:18 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:11.452 18:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.452 18:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:11.452 18:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.452 18:32:18 -- nvmf/common.sh@469 -- # nvmfpid=86588 00:16:11.452 18:32:18 -- nvmf/common.sh@470 -- # waitforlisten 86588 00:16:11.452 18:32:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.452 18:32:18 -- common/autotest_common.sh@819 -- # '[' -z 86588 ']' 00:16:11.452 18:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.452 18:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.452 18:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.452 18:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.452 18:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.452 [2024-07-14 18:32:18.708803] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
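The xtrace above is dense, so here is a minimal hand-runnable sketch of what nvmf_veth_init (nvmf/common.sh) has just built before the target starts: one network namespace for the SPDK target, veth pairs for the initiator and the two target ports, a bridge joining the host-side peers, iptables rules admitting NVMe/TCP on port 4420, and a connectivity check in both directions. Device names and addresses are copied directly from the trace; the real helper additionally tears down any leftover devices first (hence the "Cannot find device" messages above) and everything runs as root.

# create the target network namespace
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# address the initiator (10.0.0.1) and the two target interfaces (10.0.0.2, 10.0.0.3)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up, outside and inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP (port 4420) and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# verify connectivity in both directions, then load the host-side driver
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# the target itself is then launched inside the namespace, as the log shows next:
# ip netns exec nvmf_tgt_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF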
00:16:11.452 [2024-07-14 18:32:18.708901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.452 [2024-07-14 18:32:18.850420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.712 [2024-07-14 18:32:18.944800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.712 [2024-07-14 18:32:18.944963] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.712 [2024-07-14 18:32:18.944976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.712 [2024-07-14 18:32:18.944985] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.712 [2024-07-14 18:32:18.945875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.712 [2024-07-14 18:32:18.946044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.712 [2024-07-14 18:32:18.946220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.712 [2024-07-14 18:32:18.946233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.279 18:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.279 18:32:19 -- common/autotest_common.sh@852 -- # return 0 00:16:12.279 18:32:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.279 18:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:12.279 18:32:19 -- common/autotest_common.sh@10 -- # set +x 00:16:12.279 18:32:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.279 18:32:19 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:12.538 [2024-07-14 18:32:19.892620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.538 18:32:19 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.105 18:32:20 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:13.105 18:32:20 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.105 18:32:20 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:13.105 18:32:20 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.673 18:32:20 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:13.673 18:32:20 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.932 18:32:21 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:13.932 18:32:21 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:13.932 18:32:21 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.191 18:32:21 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:14.191 18:32:21 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.758 18:32:21 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:14.758 18:32:21 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.018 18:32:22 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:16:15.018 18:32:22 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:15.018 18:32:22 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:15.277 18:32:22 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:15.277 18:32:22 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:15.536 18:32:22 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:15.536 18:32:22 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:15.795 18:32:23 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.053 [2024-07-14 18:32:23.271813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.053 18:32:23 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:16.311 18:32:23 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:16.311 18:32:23 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.570 18:32:23 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:16.570 18:32:23 -- common/autotest_common.sh@1177 -- # local i=0 00:16:16.570 18:32:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.570 18:32:23 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:16.570 18:32:23 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:16.570 18:32:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:18.542 18:32:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:18.542 18:32:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:18.542 18:32:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.542 18:32:25 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:18.542 18:32:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.542 18:32:25 -- common/autotest_common.sh@1187 -- # return 0 00:16:18.542 18:32:25 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:18.542 [global] 00:16:18.542 thread=1 00:16:18.542 invalidate=1 00:16:18.542 rw=write 00:16:18.542 time_based=1 00:16:18.542 runtime=1 00:16:18.542 ioengine=libaio 00:16:18.542 direct=1 00:16:18.542 bs=4096 00:16:18.542 iodepth=1 00:16:18.542 norandommap=0 00:16:18.542 numjobs=1 00:16:18.542 00:16:18.542 verify_dump=1 00:16:18.542 verify_backlog=512 00:16:18.542 verify_state_save=0 00:16:18.542 do_verify=1 00:16:18.542 verify=crc32c-intel 00:16:18.542 [job0] 00:16:18.542 filename=/dev/nvme0n1 00:16:18.542 [job1] 00:16:18.542 filename=/dev/nvme0n2 00:16:18.542 [job2] 00:16:18.542 filename=/dev/nvme0n3 00:16:18.542 [job3] 00:16:18.542 filename=/dev/nvme0n4 00:16:18.801 Could not set queue depth (nvme0n1) 00:16:18.801 Could not set queue depth (nvme0n2) 
00:16:18.801 Could not set queue depth (nvme0n3) 00:16:18.801 Could not set queue depth (nvme0n4) 00:16:18.801 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.801 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.801 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.801 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.801 fio-3.35 00:16:18.801 Starting 4 threads 00:16:20.177 00:16:20.177 job0: (groupid=0, jobs=1): err= 0: pid=86883: Sun Jul 14 18:32:27 2024 00:16:20.177 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:20.177 slat (nsec): min=13685, max=64021, avg=17811.02, stdev=5493.71 00:16:20.177 clat (usec): min=164, max=1894, avg=218.71, stdev=46.75 00:16:20.177 lat (usec): min=178, max=1909, avg=236.52, stdev=47.14 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:16:20.177 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 221], 00:16:20.177 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 265], 00:16:20.177 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 486], 99.95th=[ 498], 00:16:20.177 | 99.99th=[ 1893] 00:16:20.177 write: IOPS=2344, BW=9379KiB/s (9604kB/s)(9388KiB/1001msec); 0 zone resets 00:16:20.177 slat (usec): min=20, max=125, avg=27.22, stdev= 8.03 00:16:20.177 clat (usec): min=131, max=410, avg=188.38, stdev=28.48 00:16:20.177 lat (usec): min=156, max=450, avg=215.59, stdev=31.13 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:16:20.177 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 192], 00:16:20.177 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 241], 00:16:20.177 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 359], 99.95th=[ 379], 00:16:20.177 | 99.99th=[ 412] 00:16:20.177 bw ( KiB/s): min= 9091, max= 9091, per=29.01%, avg=9091.00, stdev= 0.00, samples=1 00:16:20.177 iops : min= 2272, max= 2272, avg=2272.00, stdev= 0.00, samples=1 00:16:20.177 lat (usec) : 250=93.42%, 500=6.55% 00:16:20.177 lat (msec) : 2=0.02% 00:16:20.177 cpu : usr=1.90%, sys=7.30%, ctx=4395, majf=0, minf=5 00:16:20.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 issued rwts: total=2048,2347,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.177 job1: (groupid=0, jobs=1): err= 0: pid=86884: Sun Jul 14 18:32:27 2024 00:16:20.177 read: IOPS=1275, BW=5103KiB/s (5225kB/s)(5108KiB/1001msec) 00:16:20.177 slat (nsec): min=17297, max=83656, avg=25505.41, stdev=9454.53 00:16:20.177 clat (usec): min=167, max=1089, avg=366.82, stdev=56.83 00:16:20.177 lat (usec): min=193, max=1115, avg=392.33, stdev=55.87 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 215], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 334], 00:16:20.177 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 371], 00:16:20.177 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 449], 00:16:20.177 | 99.00th=[ 519], 99.50th=[ 562], 99.90th=[ 1012], 99.95th=[ 1090], 00:16:20.177 | 99.99th=[ 1090] 00:16:20.177 
write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:20.177 slat (usec): min=27, max=126, avg=41.22, stdev= 9.46 00:16:20.177 clat (usec): min=143, max=825, avg=277.95, stdev=61.10 00:16:20.177 lat (usec): min=204, max=860, avg=319.17, stdev=60.01 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 176], 5.00th=[ 198], 10.00th=[ 215], 20.00th=[ 233], 00:16:20.177 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:16:20.177 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 375], 95.00th=[ 404], 00:16:20.177 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 627], 99.95th=[ 824], 00:16:20.177 | 99.99th=[ 824] 00:16:20.177 bw ( KiB/s): min= 7434, max= 7434, per=23.73%, avg=7434.00, stdev= 0.00, samples=1 00:16:20.177 iops : min= 1858, max= 1858, avg=1858.00, stdev= 0.00, samples=1 00:16:20.177 lat (usec) : 250=19.48%, 500=79.95%, 750=0.46%, 1000=0.04% 00:16:20.177 lat (msec) : 2=0.07% 00:16:20.177 cpu : usr=1.90%, sys=7.00%, ctx=2813, majf=0, minf=10 00:16:20.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 issued rwts: total=1277,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.177 job2: (groupid=0, jobs=1): err= 0: pid=86885: Sun Jul 14 18:32:27 2024 00:16:20.177 read: IOPS=1278, BW=5115KiB/s (5238kB/s)(5120KiB/1001msec) 00:16:20.177 slat (nsec): min=17627, max=99685, avg=27848.93, stdev=8764.99 00:16:20.177 clat (usec): min=196, max=2000, avg=364.06, stdev=65.92 00:16:20.177 lat (usec): min=214, max=2019, avg=391.91, stdev=66.23 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 262], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 334], 00:16:20.177 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:16:20.177 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 433], 00:16:20.177 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 1172], 99.95th=[ 2008], 00:16:20.177 | 99.99th=[ 2008] 00:16:20.177 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:20.177 slat (usec): min=28, max=122, avg=40.49, stdev= 9.32 00:16:20.177 clat (usec): min=138, max=525, avg=278.12, stdev=50.49 00:16:20.177 lat (usec): min=172, max=571, avg=318.61, stdev=50.26 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 178], 5.00th=[ 208], 10.00th=[ 225], 20.00th=[ 241], 00:16:20.177 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:16:20.177 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 383], 00:16:20.177 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 482], 99.95th=[ 529], 00:16:20.177 | 99.99th=[ 529] 00:16:20.177 bw ( KiB/s): min= 7474, max= 7474, per=23.85%, avg=7474.00, stdev= 0.00, samples=1 00:16:20.177 iops : min= 1868, max= 1868, avg=1868.00, stdev= 0.00, samples=1 00:16:20.177 lat (usec) : 250=15.70%, 500=84.02%, 750=0.21% 00:16:20.177 lat (msec) : 2=0.04%, 4=0.04% 00:16:20.177 cpu : usr=2.30%, sys=6.80%, ctx=2816, majf=0, minf=7 00:16:20.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 issued rwts: total=1280,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.177 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:16:20.177 job3: (groupid=0, jobs=1): err= 0: pid=86886: Sun Jul 14 18:32:27 2024 00:16:20.177 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:20.177 slat (nsec): min=14503, max=72888, avg=18414.05, stdev=4988.15 00:16:20.177 clat (usec): min=170, max=705, avg=217.21, stdev=28.06 00:16:20.177 lat (usec): min=187, max=723, avg=235.63, stdev=28.51 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:16:20.177 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:16:20.177 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 265], 00:16:20.177 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 347], 00:16:20.177 | 99.99th=[ 709] 00:16:20.177 write: IOPS=2419, BW=9678KiB/s (9911kB/s)(9688KiB/1001msec); 0 zone resets 00:16:20.177 slat (usec): min=20, max=126, avg=27.92, stdev= 8.07 00:16:20.177 clat (usec): min=129, max=550, avg=182.20, stdev=28.81 00:16:20.177 lat (usec): min=151, max=572, avg=210.12, stdev=30.69 00:16:20.177 clat percentiles (usec): 00:16:20.177 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 159], 00:16:20.177 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:16:20.177 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 235], 00:16:20.177 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 326], 00:16:20.177 | 99.99th=[ 553] 00:16:20.177 bw ( KiB/s): min= 9386, max= 9386, per=29.96%, avg=9386.00, stdev= 0.00, samples=1 00:16:20.177 iops : min= 2346, max= 2346, avg=2346.00, stdev= 0.00, samples=1 00:16:20.177 lat (usec) : 250=93.78%, 500=6.17%, 750=0.04% 00:16:20.177 cpu : usr=2.60%, sys=6.90%, ctx=4473, majf=0, minf=15 00:16:20.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.177 issued rwts: total=2048,2422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.177 00:16:20.177 Run status group 0 (all jobs): 00:16:20.177 READ: bw=26.0MiB/s (27.2MB/s), 5103KiB/s-8184KiB/s (5225kB/s-8380kB/s), io=26.0MiB (27.2MB), run=1001-1001msec 00:16:20.177 WRITE: bw=30.6MiB/s (32.1MB/s), 6138KiB/s-9678KiB/s (6285kB/s-9911kB/s), io=30.6MiB (32.1MB), run=1001-1001msec 00:16:20.177 00:16:20.177 Disk stats (read/write): 00:16:20.177 nvme0n1: ios=1735/2048, merge=0/0, ticks=398/416, in_queue=814, util=87.16% 00:16:20.177 nvme0n2: ios=1045/1392, merge=0/0, ticks=411/407, in_queue=818, util=87.58% 00:16:20.177 nvme0n3: ios=1024/1396, merge=0/0, ticks=382/405, in_queue=787, util=89.16% 00:16:20.177 nvme0n4: ios=1773/2048, merge=0/0, ticks=582/395, in_queue=977, util=92.55% 00:16:20.177 18:32:27 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:20.177 [global] 00:16:20.177 thread=1 00:16:20.177 invalidate=1 00:16:20.177 rw=randwrite 00:16:20.177 time_based=1 00:16:20.177 runtime=1 00:16:20.177 ioengine=libaio 00:16:20.177 direct=1 00:16:20.177 bs=4096 00:16:20.177 iodepth=1 00:16:20.177 norandommap=0 00:16:20.177 numjobs=1 00:16:20.177 00:16:20.177 verify_dump=1 00:16:20.177 verify_backlog=512 00:16:20.177 verify_state_save=0 00:16:20.177 do_verify=1 00:16:20.177 verify=crc32c-intel 00:16:20.177 [job0] 00:16:20.177 filename=/dev/nvme0n1 00:16:20.177 [job1] 
00:16:20.177 filename=/dev/nvme0n2 00:16:20.177 [job2] 00:16:20.177 filename=/dev/nvme0n3 00:16:20.177 [job3] 00:16:20.177 filename=/dev/nvme0n4 00:16:20.177 Could not set queue depth (nvme0n1) 00:16:20.177 Could not set queue depth (nvme0n2) 00:16:20.177 Could not set queue depth (nvme0n3) 00:16:20.177 Could not set queue depth (nvme0n4) 00:16:20.177 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.177 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.177 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.177 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.177 fio-3.35 00:16:20.177 Starting 4 threads 00:16:21.550 00:16:21.550 job0: (groupid=0, jobs=1): err= 0: pid=86939: Sun Jul 14 18:32:28 2024 00:16:21.550 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:21.550 slat (nsec): min=14378, max=75270, avg=18640.63, stdev=5461.65 00:16:21.550 clat (usec): min=155, max=339, avg=211.16, stdev=24.86 00:16:21.550 lat (usec): min=170, max=358, avg=229.80, stdev=25.35 00:16:21.550 clat percentiles (usec): 00:16:21.550 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 192], 00:16:21.550 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:16:21.550 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 255], 00:16:21.550 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 330], 00:16:21.550 | 99.99th=[ 338] 00:16:21.550 write: IOPS=2531, BW=9.89MiB/s (10.4MB/s)(9.90MiB/1001msec); 0 zone resets 00:16:21.550 slat (usec): min=20, max=118, avg=28.26, stdev= 8.96 00:16:21.550 clat (usec): min=105, max=534, avg=176.80, stdev=27.51 00:16:21.550 lat (usec): min=128, max=560, avg=205.07, stdev=29.41 00:16:21.550 clat percentiles (usec): 00:16:21.550 | 1.00th=[ 121], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 157], 00:16:21.550 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 182], 00:16:21.550 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 00:16:21.550 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 375], 00:16:21.550 | 99.99th=[ 537] 00:16:21.550 bw ( KiB/s): min=10088, max=10088, per=38.07%, avg=10088.00, stdev= 0.00, samples=1 00:16:21.550 iops : min= 2522, max= 2522, avg=2522.00, stdev= 0.00, samples=1 00:16:21.550 lat (usec) : 250=96.27%, 500=3.71%, 750=0.02% 00:16:21.550 cpu : usr=1.40%, sys=8.40%, ctx=4582, majf=0, minf=17 00:16:21.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 issued rwts: total=2048,2534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.551 job1: (groupid=0, jobs=1): err= 0: pid=86940: Sun Jul 14 18:32:28 2024 00:16:21.551 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:21.551 slat (nsec): min=6134, max=61919, avg=16707.03, stdev=7391.95 00:16:21.551 clat (usec): min=238, max=2437, avg=490.54, stdev=111.93 00:16:21.551 lat (usec): min=266, max=2457, avg=507.24, stdev=111.89 00:16:21.551 clat percentiles (usec): 00:16:21.551 | 1.00th=[ 318], 5.00th=[ 351], 10.00th=[ 375], 20.00th=[ 404], 00:16:21.551 | 30.00th=[ 433], 40.00th=[ 461], 
50.00th=[ 486], 60.00th=[ 506], 00:16:21.551 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 644], 00:16:21.551 | 99.00th=[ 783], 99.50th=[ 832], 99.90th=[ 1037], 99.95th=[ 2442], 00:16:21.551 | 99.99th=[ 2442] 00:16:21.551 write: IOPS=1275, BW=5103KiB/s (5225kB/s)(5108KiB/1001msec); 0 zone resets 00:16:21.551 slat (usec): min=10, max=114, avg=28.61, stdev=12.41 00:16:21.551 clat (usec): min=138, max=2108, avg=343.49, stdev=81.63 00:16:21.551 lat (usec): min=160, max=2132, avg=372.10, stdev=81.93 00:16:21.551 clat percentiles (usec): 00:16:21.551 | 1.00th=[ 208], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 289], 00:16:21.551 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 355], 00:16:21.551 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 457], 00:16:21.551 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 627], 99.95th=[ 2114], 00:16:21.551 | 99.99th=[ 2114] 00:16:21.551 bw ( KiB/s): min= 5432, max= 5432, per=20.50%, avg=5432.00, stdev= 0.00, samples=1 00:16:21.551 iops : min= 1358, max= 1358, avg=1358.00, stdev= 0.00, samples=1 00:16:21.551 lat (usec) : 250=2.61%, 500=77.49%, 750=19.21%, 1000=0.56% 00:16:21.551 lat (msec) : 2=0.04%, 4=0.09% 00:16:21.551 cpu : usr=1.30%, sys=3.90%, ctx=2380, majf=0, minf=7 00:16:21.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 issued rwts: total=1024,1277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.551 job2: (groupid=0, jobs=1): err= 0: pid=86942: Sun Jul 14 18:32:28 2024 00:16:21.551 read: IOPS=1103, BW=4416KiB/s (4522kB/s)(4420KiB/1001msec) 00:16:21.551 slat (nsec): min=7285, max=81552, avg=19829.26, stdev=8955.92 00:16:21.551 clat (usec): min=179, max=7296, avg=455.31, stdev=294.76 00:16:21.551 lat (usec): min=198, max=7313, avg=475.14, stdev=295.02 00:16:21.551 clat percentiles (usec): 00:16:21.551 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 235], 20.00th=[ 255], 00:16:21.551 | 30.00th=[ 273], 40.00th=[ 314], 50.00th=[ 445], 60.00th=[ 494], 00:16:21.551 | 70.00th=[ 545], 80.00th=[ 652], 90.00th=[ 734], 95.00th=[ 791], 00:16:21.551 | 99.00th=[ 881], 99.50th=[ 955], 99.90th=[ 3294], 99.95th=[ 7308], 00:16:21.551 | 99.99th=[ 7308] 00:16:21.551 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:21.551 slat (usec): min=10, max=121, avg=26.32, stdev=10.54 00:16:21.551 clat (usec): min=129, max=745, avg=279.05, stdev=103.70 00:16:21.551 lat (usec): min=157, max=780, avg=305.37, stdev=102.47 00:16:21.551 clat percentiles (usec): 00:16:21.551 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 182], 00:16:21.551 | 30.00th=[ 192], 40.00th=[ 212], 50.00th=[ 243], 60.00th=[ 310], 00:16:21.551 | 70.00th=[ 359], 80.00th=[ 388], 90.00th=[ 420], 95.00th=[ 449], 00:16:21.551 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[ 709], 99.95th=[ 742], 00:16:21.551 | 99.99th=[ 742] 00:16:21.551 bw ( KiB/s): min= 8192, max= 8192, per=30.92%, avg=8192.00, stdev= 0.00, samples=1 00:16:21.551 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:21.551 lat (usec) : 250=37.22%, 500=45.97%, 750=13.37%, 1000=3.33% 00:16:21.551 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04% 00:16:21.551 cpu : usr=1.40%, sys=4.80%, ctx=2686, majf=0, minf=11 00:16:21.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.551 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 issued rwts: total=1105,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.551 job3: (groupid=0, jobs=1): err= 0: pid=86943: Sun Jul 14 18:32:28 2024 00:16:21.551 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:21.551 slat (nsec): min=6359, max=78586, avg=15717.76, stdev=8968.83 00:16:21.551 clat (usec): min=302, max=2358, avg=490.11, stdev=108.17 00:16:21.551 lat (usec): min=320, max=2375, avg=505.83, stdev=108.83 00:16:21.551 clat percentiles (usec): 00:16:21.551 | 1.00th=[ 322], 5.00th=[ 355], 10.00th=[ 375], 20.00th=[ 408], 00:16:21.551 | 30.00th=[ 433], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 506], 00:16:21.551 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 644], 00:16:21.551 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 1090], 99.95th=[ 2343], 00:16:21.551 | 99.99th=[ 2343] 00:16:21.551 write: IOPS=1282, BW=5131KiB/s (5254kB/s)(5136KiB/1001msec); 0 zone resets 00:16:21.551 slat (usec): min=10, max=140, avg=27.93, stdev=11.45 00:16:21.551 clat (usec): min=147, max=2200, avg=343.64, stdev=82.23 00:16:21.551 lat (usec): min=188, max=2225, avg=371.57, stdev=82.21 00:16:21.551 clat percentiles (usec): 00:16:21.551 | 1.00th=[ 217], 5.00th=[ 251], 10.00th=[ 273], 20.00th=[ 293], 00:16:21.551 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 351], 00:16:21.551 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 429], 95.00th=[ 457], 00:16:21.551 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 2212], 00:16:21.551 | 99.99th=[ 2212] 00:16:21.551 bw ( KiB/s): min= 5456, max= 5456, per=20.59%, avg=5456.00, stdev= 0.00, samples=1 00:16:21.551 iops : min= 1364, max= 1364, avg=1364.00, stdev= 0.00, samples=1 00:16:21.551 lat (usec) : 250=2.60%, 500=76.99%, 750=19.97%, 1000=0.30% 00:16:21.551 lat (msec) : 2=0.04%, 4=0.09% 00:16:21.551 cpu : usr=0.70%, sys=4.30%, ctx=2391, majf=0, minf=10 00:16:21.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.551 issued rwts: total=1024,1284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.551 00:16:21.551 Run status group 0 (all jobs): 00:16:21.551 READ: bw=20.3MiB/s (21.3MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=20.3MiB (21.3MB), run=1001-1001msec 00:16:21.551 WRITE: bw=25.9MiB/s (27.1MB/s), 5103KiB/s-9.89MiB/s (5225kB/s-10.4MB/s), io=25.9MiB (27.2MB), run=1001-1001msec 00:16:21.551 00:16:21.551 Disk stats (read/write): 00:16:21.551 nvme0n1: ios=1968/2048, merge=0/0, ticks=449/383, in_queue=832, util=87.98% 00:16:21.551 nvme0n2: ios=996/1024, merge=0/0, ticks=486/348, in_queue=834, util=88.63% 00:16:21.551 nvme0n3: ios=1024/1342, merge=0/0, ticks=435/376, in_queue=811, util=88.65% 00:16:21.551 nvme0n4: ios=967/1024, merge=0/0, ticks=463/351, in_queue=814, util=89.73% 00:16:21.551 18:32:28 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:21.551 [global] 00:16:21.551 thread=1 00:16:21.551 invalidate=1 00:16:21.551 rw=write 00:16:21.551 time_based=1 00:16:21.551 runtime=1 00:16:21.551 ioengine=libaio 00:16:21.551 direct=1 00:16:21.551 
bs=4096 00:16:21.551 iodepth=128 00:16:21.551 norandommap=0 00:16:21.551 numjobs=1 00:16:21.551 00:16:21.551 verify_dump=1 00:16:21.551 verify_backlog=512 00:16:21.551 verify_state_save=0 00:16:21.551 do_verify=1 00:16:21.551 verify=crc32c-intel 00:16:21.551 [job0] 00:16:21.551 filename=/dev/nvme0n1 00:16:21.551 [job1] 00:16:21.551 filename=/dev/nvme0n2 00:16:21.551 [job2] 00:16:21.551 filename=/dev/nvme0n3 00:16:21.551 [job3] 00:16:21.551 filename=/dev/nvme0n4 00:16:21.551 Could not set queue depth (nvme0n1) 00:16:21.551 Could not set queue depth (nvme0n2) 00:16:21.551 Could not set queue depth (nvme0n3) 00:16:21.551 Could not set queue depth (nvme0n4) 00:16:21.551 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.551 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.551 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.551 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.551 fio-3.35 00:16:21.551 Starting 4 threads 00:16:22.929 00:16:22.929 job0: (groupid=0, jobs=1): err= 0: pid=86997: Sun Jul 14 18:32:30 2024 00:16:22.929 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:16:22.929 slat (usec): min=6, max=15987, avg=350.81, stdev=1686.20 00:16:22.929 clat (usec): min=27059, max=67402, avg=43862.06, stdev=8441.53 00:16:22.929 lat (usec): min=28367, max=67415, avg=44212.87, stdev=8372.69 00:16:22.929 clat percentiles (usec): 00:16:22.929 | 1.00th=[29754], 5.00th=[32900], 10.00th=[35914], 20.00th=[38011], 00:16:22.929 | 30.00th=[39584], 40.00th=[40109], 50.00th=[41681], 60.00th=[42730], 00:16:22.929 | 70.00th=[45351], 80.00th=[49021], 90.00th=[57934], 95.00th=[63177], 00:16:22.929 | 99.00th=[66323], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:16:22.929 | 99.99th=[67634] 00:16:22.929 write: IOPS=1650, BW=6603KiB/s (6762kB/s)(6656KiB/1008msec); 0 zone resets 00:16:22.929 slat (usec): min=11, max=13309, avg=267.98, stdev=1359.40 00:16:22.929 clat (usec): min=7030, max=63190, avg=35327.08, stdev=7595.83 00:16:22.929 lat (usec): min=9105, max=63215, avg=35595.06, stdev=7517.20 00:16:22.929 clat percentiles (usec): 00:16:22.929 | 1.00th=[11338], 5.00th=[25560], 10.00th=[28181], 20.00th=[30802], 00:16:22.929 | 30.00th=[32113], 40.00th=[32900], 50.00th=[33817], 60.00th=[35390], 00:16:22.929 | 70.00th=[39060], 80.00th=[40109], 90.00th=[42730], 95.00th=[49546], 00:16:22.929 | 99.00th=[58459], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:16:22.929 | 99.99th=[63177] 00:16:22.929 bw ( KiB/s): min= 4160, max= 8192, per=13.49%, avg=6176.00, stdev=2851.05, samples=2 00:16:22.929 iops : min= 1040, max= 2048, avg=1544.00, stdev=712.76, samples=2 00:16:22.929 lat (msec) : 10=0.28%, 20=1.25%, 50=87.22%, 100=11.25% 00:16:22.929 cpu : usr=2.09%, sys=4.77%, ctx=147, majf=0, minf=15 00:16:22.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:16:22.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.929 issued rwts: total=1536,1664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.929 job1: (groupid=0, jobs=1): err= 0: pid=86998: Sun Jul 14 18:32:30 2024 00:16:22.929 read: IOPS=4001, BW=15.6MiB/s 
(16.4MB/s)(15.7MiB/1005msec) 00:16:22.929 slat (usec): min=8, max=7822, avg=120.46, stdev=733.32 00:16:22.929 clat (usec): min=574, max=24937, avg=15373.93, stdev=1937.87 00:16:22.929 lat (usec): min=7236, max=24979, avg=15494.39, stdev=2021.68 00:16:22.929 clat percentiles (usec): 00:16:22.929 | 1.00th=[ 8094], 5.00th=[11994], 10.00th=[13566], 20.00th=[14484], 00:16:22.929 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15270], 60.00th=[15926], 00:16:22.929 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17433], 95.00th=[18220], 00:16:22.929 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23987], 99.95th=[24249], 00:16:22.929 | 99.99th=[25035] 00:16:22.929 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:16:22.929 slat (usec): min=11, max=8483, avg=117.99, stdev=702.21 00:16:22.929 clat (usec): min=9144, max=25531, avg=15895.64, stdev=1850.06 00:16:22.929 lat (usec): min=9171, max=25567, avg=16013.62, stdev=1875.15 00:16:22.929 clat percentiles (usec): 00:16:22.929 | 1.00th=[ 9765], 5.00th=[12649], 10.00th=[14484], 20.00th=[15008], 00:16:22.929 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:16:22.929 | 70.00th=[16450], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:16:22.929 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22938], 99.95th=[23987], 00:16:22.929 | 99.99th=[25560] 00:16:22.929 bw ( KiB/s): min=16384, max=16416, per=35.82%, avg=16400.00, stdev=22.63, samples=2 00:16:22.929 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:16:22.929 lat (usec) : 750=0.01% 00:16:22.929 lat (msec) : 10=1.56%, 20=96.56%, 50=1.86% 00:16:22.929 cpu : usr=3.98%, sys=12.35%, ctx=257, majf=0, minf=6 00:16:22.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:22.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.929 issued rwts: total=4022,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.929 job2: (groupid=0, jobs=1): err= 0: pid=86999: Sun Jul 14 18:32:30 2024 00:16:22.929 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:16:22.929 slat (usec): min=7, max=12322, avg=243.17, stdev=1211.05 00:16:22.929 clat (usec): min=22107, max=45708, avg=31683.58, stdev=4011.95 00:16:22.929 lat (usec): min=22131, max=45774, avg=31926.75, stdev=4100.27 00:16:22.929 clat percentiles (usec): 00:16:22.929 | 1.00th=[22676], 5.00th=[25560], 10.00th=[27395], 20.00th=[28705], 00:16:22.929 | 30.00th=[29492], 40.00th=[30540], 50.00th=[30802], 60.00th=[31851], 00:16:22.929 | 70.00th=[33817], 80.00th=[35914], 90.00th=[37487], 95.00th=[38011], 00:16:22.929 | 99.00th=[40109], 99.50th=[42206], 99.90th=[44827], 99.95th=[45351], 00:16:22.929 | 99.99th=[45876] 00:16:22.929 write: IOPS=2176, BW=8706KiB/s (8915kB/s)(8776KiB/1008msec); 0 zone resets 00:16:22.929 slat (usec): min=19, max=10654, avg=219.73, stdev=1180.34 00:16:22.929 clat (usec): min=6099, max=44668, avg=28096.05, stdev=4753.70 00:16:22.929 lat (usec): min=8121, max=44727, avg=28315.78, stdev=4851.22 00:16:22.929 clat percentiles (usec): 00:16:22.930 | 1.00th=[12649], 5.00th=[22152], 10.00th=[22938], 20.00th=[23462], 00:16:22.930 | 30.00th=[25560], 40.00th=[26608], 50.00th=[28705], 60.00th=[30016], 00:16:22.930 | 70.00th=[30278], 80.00th=[31851], 90.00th=[34341], 95.00th=[35390], 00:16:22.930 | 99.00th=[38536], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:16:22.930 
| 99.99th=[44827] 00:16:22.930 bw ( KiB/s): min= 8208, max= 8336, per=18.07%, avg=8272.00, stdev=90.51, samples=2 00:16:22.930 iops : min= 2052, max= 2084, avg=2068.00, stdev=22.63, samples=2 00:16:22.930 lat (msec) : 10=0.21%, 20=1.06%, 50=98.73% 00:16:22.930 cpu : usr=2.38%, sys=7.45%, ctx=173, majf=0, minf=11 00:16:22.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:22.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.930 issued rwts: total=2048,2194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.930 job3: (groupid=0, jobs=1): err= 0: pid=87001: Sun Jul 14 18:32:30 2024 00:16:22.930 read: IOPS=3151, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1002msec) 00:16:22.930 slat (usec): min=6, max=5053, avg=140.72, stdev=652.73 00:16:22.930 clat (usec): min=567, max=24728, avg=18274.51, stdev=2262.83 00:16:22.930 lat (usec): min=4705, max=24996, avg=18415.23, stdev=2185.11 00:16:22.930 clat percentiles (usec): 00:16:22.930 | 1.00th=[ 5538], 5.00th=[15008], 10.00th=[16188], 20.00th=[17171], 00:16:22.930 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:16:22.930 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20579], 95.00th=[21365], 00:16:22.930 | 99.00th=[22676], 99.50th=[22938], 99.90th=[24511], 99.95th=[24773], 00:16:22.930 | 99.99th=[24773] 00:16:22.930 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:16:22.930 slat (usec): min=12, max=5547, avg=146.19, stdev=652.54 00:16:22.930 clat (usec): min=13975, max=24500, avg=19088.54, stdev=2298.86 00:16:22.930 lat (usec): min=14004, max=24537, avg=19234.73, stdev=2283.61 00:16:22.930 clat percentiles (usec): 00:16:22.930 | 1.00th=[14615], 5.00th=[15270], 10.00th=[16057], 20.00th=[16909], 00:16:22.930 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18744], 60.00th=[19530], 00:16:22.930 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22414], 95.00th=[22676], 00:16:22.930 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:16:22.930 | 99.99th=[24511] 00:16:22.930 bw ( KiB/s): min=13960, max=14404, per=30.97%, avg=14182.00, stdev=313.96, samples=2 00:16:22.930 iops : min= 3490, max= 3601, avg=3545.50, stdev=78.49, samples=2 00:16:22.930 lat (usec) : 750=0.01% 00:16:22.930 lat (msec) : 10=0.47%, 20=72.52%, 50=26.99% 00:16:22.930 cpu : usr=3.80%, sys=11.59%, ctx=460, majf=0, minf=3 00:16:22.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:22.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.930 issued rwts: total=3158,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.930 00:16:22.930 Run status group 0 (all jobs): 00:16:22.930 READ: bw=41.7MiB/s (43.7MB/s), 6095KiB/s-15.6MiB/s (6242kB/s-16.4MB/s), io=42.0MiB (44.1MB), run=1002-1008msec 00:16:22.930 WRITE: bw=44.7MiB/s (46.9MB/s), 6603KiB/s-15.9MiB/s (6762kB/s-16.7MB/s), io=45.1MiB (47.3MB), run=1002-1008msec 00:16:22.930 00:16:22.930 Disk stats (read/write): 00:16:22.930 nvme0n1: ios=1212/1536, merge=0/0, ticks=13430/12721, in_queue=26151, util=87.56% 00:16:22.930 nvme0n2: ios=3430/3584, merge=0/0, ticks=24033/24629, in_queue=48662, util=88.35% 00:16:22.930 nvme0n3: ios=1573/2048, merge=0/0, ticks=16213/17179, 
in_queue=33392, util=89.11% 00:16:22.930 nvme0n4: ios=2740/3072, merge=0/0, ticks=12053/13062, in_queue=25115, util=89.77% 00:16:22.930 18:32:30 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:22.930 [global] 00:16:22.930 thread=1 00:16:22.930 invalidate=1 00:16:22.930 rw=randwrite 00:16:22.930 time_based=1 00:16:22.930 runtime=1 00:16:22.930 ioengine=libaio 00:16:22.930 direct=1 00:16:22.930 bs=4096 00:16:22.930 iodepth=128 00:16:22.930 norandommap=0 00:16:22.930 numjobs=1 00:16:22.930 00:16:22.930 verify_dump=1 00:16:22.930 verify_backlog=512 00:16:22.930 verify_state_save=0 00:16:22.930 do_verify=1 00:16:22.930 verify=crc32c-intel 00:16:22.930 [job0] 00:16:22.930 filename=/dev/nvme0n1 00:16:22.930 [job1] 00:16:22.930 filename=/dev/nvme0n2 00:16:22.930 [job2] 00:16:22.930 filename=/dev/nvme0n3 00:16:22.930 [job3] 00:16:22.930 filename=/dev/nvme0n4 00:16:22.930 Could not set queue depth (nvme0n1) 00:16:22.930 Could not set queue depth (nvme0n2) 00:16:22.930 Could not set queue depth (nvme0n3) 00:16:22.930 Could not set queue depth (nvme0n4) 00:16:22.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.930 fio-3.35 00:16:22.930 Starting 4 threads 00:16:24.309 00:16:24.309 job0: (groupid=0, jobs=1): err= 0: pid=87064: Sun Jul 14 18:32:31 2024 00:16:24.309 read: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1006msec) 00:16:24.309 slat (usec): min=8, max=7972, avg=121.85, stdev=761.27 00:16:24.309 clat (usec): min=3193, max=27041, avg=15926.77, stdev=2056.87 00:16:24.309 lat (usec): min=5539, max=27123, avg=16048.62, stdev=2162.31 00:16:24.309 clat percentiles (usec): 00:16:24.309 | 1.00th=[ 6456], 5.00th=[13960], 10.00th=[14353], 20.00th=[14615], 00:16:24.309 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15795], 60.00th=[16319], 00:16:24.309 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18482], 00:16:24.309 | 99.00th=[22414], 99.50th=[23725], 99.90th=[24511], 99.95th=[24773], 00:16:24.309 | 99.99th=[27132] 00:16:24.309 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:16:24.309 slat (usec): min=10, max=8023, avg=119.74, stdev=706.14 00:16:24.309 clat (usec): min=8823, max=24768, avg=15820.23, stdev=1804.58 00:16:24.309 lat (usec): min=8840, max=25149, avg=15939.97, stdev=1846.65 00:16:24.309 clat percentiles (usec): 00:16:24.309 | 1.00th=[ 9765], 5.00th=[12518], 10.00th=[14091], 20.00th=[15008], 00:16:24.309 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:16:24.309 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[18220], 00:16:24.309 | 99.00th=[21890], 99.50th=[23200], 99.90th=[24511], 99.95th=[24511], 00:16:24.309 | 99.99th=[24773] 00:16:24.309 bw ( KiB/s): min=16384, max=16384, per=35.02%, avg=16384.00, stdev= 0.00, samples=2 00:16:24.309 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:24.309 lat (msec) : 4=0.01%, 10=1.30%, 20=96.30%, 50=2.39% 00:16:24.309 cpu : usr=3.38%, sys=12.54%, ctx=239, majf=0, minf=12 00:16:24.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:16:24.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.309 issued rwts: total=3911,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.309 job1: (groupid=0, jobs=1): err= 0: pid=87065: Sun Jul 14 18:32:31 2024 00:16:24.309 read: IOPS=1910, BW=7642KiB/s (7826kB/s)(7688KiB/1006msec) 00:16:24.309 slat (usec): min=4, max=20271, avg=260.86, stdev=1427.18 00:16:24.309 clat (usec): min=2338, max=47781, avg=31508.66, stdev=5671.50 00:16:24.309 lat (usec): min=8674, max=47819, avg=31769.51, stdev=5769.90 00:16:24.309 clat percentiles (usec): 00:16:24.309 | 1.00th=[ 9110], 5.00th=[21890], 10.00th=[25822], 20.00th=[27657], 00:16:24.309 | 30.00th=[30540], 40.00th=[31327], 50.00th=[32113], 60.00th=[33424], 00:16:24.309 | 70.00th=[33817], 80.00th=[34341], 90.00th=[38011], 95.00th=[38536], 00:16:24.309 | 99.00th=[43254], 99.50th=[45351], 99.90th=[47449], 99.95th=[47973], 00:16:24.309 | 99.99th=[47973] 00:16:24.309 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:16:24.309 slat (usec): min=5, max=15472, avg=236.73, stdev=1620.71 00:16:24.309 clat (usec): min=18254, max=49187, avg=32252.73, stdev=3264.51 00:16:24.309 lat (usec): min=18300, max=49246, avg=32489.45, stdev=3594.97 00:16:24.309 clat percentiles (usec): 00:16:24.309 | 1.00th=[23462], 5.00th=[26084], 10.00th=[29492], 20.00th=[31065], 00:16:24.309 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32900], 00:16:24.309 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[36439], 00:16:24.309 | 99.00th=[45351], 99.50th=[46400], 99.90th=[47973], 99.95th=[48497], 00:16:24.309 | 99.99th=[49021] 00:16:24.309 bw ( KiB/s): min= 8175, max= 8192, per=17.49%, avg=8183.50, stdev=12.02, samples=2 00:16:24.309 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:16:24.309 lat (msec) : 4=0.03%, 10=1.16%, 20=0.60%, 50=98.21% 00:16:24.309 cpu : usr=1.79%, sys=6.07%, ctx=301, majf=0, minf=9 00:16:24.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:24.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.309 issued rwts: total=1922,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.309 job2: (groupid=0, jobs=1): err= 0: pid=87066: Sun Jul 14 18:32:31 2024 00:16:24.309 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:16:24.309 slat (usec): min=6, max=16175, avg=143.15, stdev=988.01 00:16:24.309 clat (usec): min=4718, max=35160, avg=18702.44, stdev=4349.90 00:16:24.309 lat (usec): min=4732, max=35176, avg=18845.59, stdev=4399.08 00:16:24.309 clat percentiles (usec): 00:16:24.309 | 1.00th=[11076], 5.00th=[13173], 10.00th=[14484], 20.00th=[15795], 00:16:24.309 | 30.00th=[16188], 40.00th=[17171], 50.00th=[18220], 60.00th=[19006], 00:16:24.309 | 70.00th=[19792], 80.00th=[21627], 90.00th=[24249], 95.00th=[27657], 00:16:24.309 | 99.00th=[32113], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:16:24.309 | 99.99th=[35390] 00:16:24.309 write: IOPS=3568, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:16:24.309 slat (usec): min=5, max=16570, avg=126.92, stdev=922.61 00:16:24.309 clat (usec): min=3722, max=35168, avg=16817.04, stdev=3101.61 00:16:24.309 lat (usec): 
min=3755, max=35221, avg=16943.96, stdev=3233.90 00:16:24.309 clat percentiles (usec): 00:16:24.309 | 1.00th=[ 6521], 5.00th=[ 9896], 10.00th=[12780], 20.00th=[16319], 00:16:24.309 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:16:24.309 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19530], 95.00th=[19792], 00:16:24.309 | 99.00th=[20317], 99.50th=[30278], 99.90th=[33817], 99.95th=[35390], 00:16:24.309 | 99.99th=[35390] 00:16:24.309 bw ( KiB/s): min=12944, max=15759, per=30.67%, avg=14351.50, stdev=1990.51, samples=2 00:16:24.309 iops : min= 3236, max= 3939, avg=3587.50, stdev=497.10, samples=2 00:16:24.309 lat (msec) : 4=0.08%, 10=2.98%, 20=81.26%, 50=15.68% 00:16:24.309 cpu : usr=3.39%, sys=10.66%, ctx=365, majf=0, minf=13 00:16:24.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:24.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.309 issued rwts: total=3584,3586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.309 job3: (groupid=0, jobs=1): err= 0: pid=87067: Sun Jul 14 18:32:31 2024 00:16:24.310 read: IOPS=1842, BW=7368KiB/s (7545kB/s)(7420KiB/1007msec) 00:16:24.310 slat (usec): min=4, max=15341, avg=258.63, stdev=1315.90 00:16:24.310 clat (usec): min=2083, max=54785, avg=32556.39, stdev=5405.09 00:16:24.310 lat (usec): min=7878, max=55794, avg=32815.01, stdev=5473.18 00:16:24.310 clat percentiles (usec): 00:16:24.310 | 1.00th=[15795], 5.00th=[25822], 10.00th=[27919], 20.00th=[30540], 00:16:24.310 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32637], 60.00th=[33424], 00:16:24.310 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39060], 95.00th=[41157], 00:16:24.310 | 99.00th=[46400], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:16:24.310 | 99.99th=[54789] 00:16:24.310 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:16:24.310 slat (usec): min=4, max=16036, avg=247.91, stdev=1650.87 00:16:24.310 clat (usec): min=18798, max=49446, avg=32286.01, stdev=3554.14 00:16:24.310 lat (usec): min=18984, max=51145, avg=32533.92, stdev=3811.99 00:16:24.310 clat percentiles (usec): 00:16:24.310 | 1.00th=[20317], 5.00th=[25297], 10.00th=[30016], 20.00th=[30802], 00:16:24.310 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32637], 60.00th=[33162], 00:16:24.310 | 70.00th=[33424], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:16:24.310 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47973], 99.95th=[48497], 00:16:24.310 | 99.99th=[49546] 00:16:24.310 bw ( KiB/s): min= 8192, max= 8208, per=17.53%, avg=8200.00, stdev=11.31, samples=2 00:16:24.310 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:16:24.310 lat (msec) : 4=0.03%, 10=0.44%, 20=1.56%, 50=97.69%, 100=0.28% 00:16:24.310 cpu : usr=2.19%, sys=5.96%, ctx=297, majf=0, minf=15 00:16:24.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:24.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.310 issued rwts: total=1855,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.310 00:16:24.310 Run status group 0 (all jobs): 00:16:24.310 READ: bw=43.7MiB/s (45.8MB/s), 7368KiB/s-15.2MiB/s (7545kB/s-15.9MB/s), io=44.0MiB (46.2MB), run=1005-1007msec 
00:16:24.310 WRITE: bw=45.7MiB/s (47.9MB/s), 8135KiB/s-15.9MiB/s (8330kB/s-16.7MB/s), io=46.0MiB (48.2MB), run=1005-1007msec 00:16:24.310 00:16:24.310 Disk stats (read/write): 00:16:24.310 nvme0n1: ios=3340/3584, merge=0/0, ticks=24032/24727, in_queue=48759, util=89.09% 00:16:24.310 nvme0n2: ios=1585/1828, merge=0/0, ticks=24066/26402, in_queue=50468, util=88.28% 00:16:24.310 nvme0n3: ios=2996/3072, merge=0/0, ticks=53525/49469, in_queue=102994, util=89.70% 00:16:24.310 nvme0n4: ios=1536/1759, merge=0/0, ticks=24189/26337, in_queue=50526, util=89.02% 00:16:24.310 18:32:31 -- target/fio.sh@55 -- # sync 00:16:24.310 18:32:31 -- target/fio.sh@59 -- # fio_pid=87083 00:16:24.310 18:32:31 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:24.310 18:32:31 -- target/fio.sh@61 -- # sleep 3 00:16:24.310 [global] 00:16:24.310 thread=1 00:16:24.310 invalidate=1 00:16:24.310 rw=read 00:16:24.310 time_based=1 00:16:24.310 runtime=10 00:16:24.310 ioengine=libaio 00:16:24.310 direct=1 00:16:24.310 bs=4096 00:16:24.310 iodepth=1 00:16:24.310 norandommap=1 00:16:24.310 numjobs=1 00:16:24.310 00:16:24.310 [job0] 00:16:24.310 filename=/dev/nvme0n1 00:16:24.310 [job1] 00:16:24.310 filename=/dev/nvme0n2 00:16:24.310 [job2] 00:16:24.310 filename=/dev/nvme0n3 00:16:24.310 [job3] 00:16:24.310 filename=/dev/nvme0n4 00:16:24.310 Could not set queue depth (nvme0n1) 00:16:24.310 Could not set queue depth (nvme0n2) 00:16:24.310 Could not set queue depth (nvme0n3) 00:16:24.310 Could not set queue depth (nvme0n4) 00:16:24.310 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.310 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.310 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.310 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.310 fio-3.35 00:16:24.310 Starting 4 threads 00:16:27.601 18:32:34 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:27.601 fio: pid=87126, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.601 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45342720, buflen=4096 00:16:27.601 18:32:34 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:27.859 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=35119104, buflen=4096 00:16:27.859 fio: pid=87125, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.859 18:32:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.859 18:32:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:28.117 fio: pid=87123, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:28.117 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=60612608, buflen=4096 00:16:28.117 18:32:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.117 18:32:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:28.377 fio: pid=87124, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:28.377 fio: io_u error on file /dev/nvme0n2: Remote I/O error: 
read offset=41373696, buflen=4096 00:16:28.377 00:16:28.377 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87123: Sun Jul 14 18:32:35 2024 00:16:28.377 read: IOPS=4263, BW=16.7MiB/s (17.5MB/s)(57.8MiB/3471msec) 00:16:28.377 slat (usec): min=7, max=12772, avg=20.28, stdev=170.08 00:16:28.377 clat (usec): min=75, max=3867, avg=212.61, stdev=68.37 00:16:28.377 lat (usec): min=141, max=13085, avg=232.90, stdev=183.79 00:16:28.377 clat percentiles (usec): 00:16:28.377 | 1.00th=[ 145], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 188], 00:16:28.377 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:16:28.377 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 277], 00:16:28.377 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 611], 99.95th=[ 1729], 00:16:28.377 | 99.99th=[ 3458] 00:16:28.377 bw ( KiB/s): min=17032, max=17924, per=36.76%, avg=17559.33, stdev=326.34, samples=6 00:16:28.377 iops : min= 4258, max= 4481, avg=4389.83, stdev=81.59, samples=6 00:16:28.377 lat (usec) : 100=0.01%, 250=90.29%, 500=9.57%, 750=0.03%, 1000=0.02% 00:16:28.377 lat (msec) : 2=0.04%, 4=0.03% 00:16:28.377 cpu : usr=1.04%, sys=6.14%, ctx=14823, majf=0, minf=1 00:16:28.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 issued rwts: total=14799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.377 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87124: Sun Jul 14 18:32:35 2024 00:16:28.377 read: IOPS=2708, BW=10.6MiB/s (11.1MB/s)(39.5MiB/3730msec) 00:16:28.377 slat (usec): min=7, max=11416, avg=21.66, stdev=213.49 00:16:28.377 clat (usec): min=132, max=3729, avg=345.77, stdev=117.01 00:16:28.377 lat (usec): min=146, max=11634, avg=367.43, stdev=242.81 00:16:28.377 clat percentiles (usec): 00:16:28.377 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 235], 00:16:28.377 | 30.00th=[ 302], 40.00th=[ 351], 50.00th=[ 371], 60.00th=[ 388], 00:16:28.377 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 482], 00:16:28.377 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 832], 99.95th=[ 1221], 00:16:28.377 | 99.99th=[ 3064] 00:16:28.377 bw ( KiB/s): min= 9240, max=15076, per=21.67%, avg=10351.00, stdev=2092.45, samples=7 00:16:28.377 iops : min= 2310, max= 3769, avg=2587.71, stdev=523.13, samples=7 00:16:28.377 lat (usec) : 250=22.43%, 500=73.69%, 750=3.74%, 1000=0.07% 00:16:28.377 lat (msec) : 2=0.03%, 4=0.03% 00:16:28.377 cpu : usr=0.67%, sys=4.16%, ctx=10136, majf=0, minf=1 00:16:28.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 issued rwts: total=10102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.377 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87125: Sun Jul 14 18:32:35 2024 00:16:28.377 read: IOPS=2670, BW=10.4MiB/s (10.9MB/s)(33.5MiB/3211msec) 00:16:28.377 slat (usec): min=7, max=7396, avg=18.04, stdev=108.23 00:16:28.377 clat (usec): min=53, max=3153, avg=354.55, stdev=88.58 
00:16:28.377 lat (usec): min=185, max=7715, avg=372.59, stdev=139.17 00:16:28.377 clat percentiles (usec): 00:16:28.377 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 245], 20.00th=[ 273], 00:16:28.377 | 30.00th=[ 302], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 383], 00:16:28.377 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 449], 95.00th=[ 482], 00:16:28.377 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 701], 99.95th=[ 889], 00:16:28.377 | 99.99th=[ 3163] 00:16:28.377 bw ( KiB/s): min= 9453, max=13704, per=22.17%, avg=10590.17, stdev=1658.13, samples=6 00:16:28.377 iops : min= 2363, max= 3426, avg=2647.50, stdev=414.57, samples=6 00:16:28.377 lat (usec) : 100=0.01%, 250=11.92%, 500=84.29%, 750=3.70%, 1000=0.03% 00:16:28.377 lat (msec) : 2=0.02%, 4=0.01% 00:16:28.377 cpu : usr=1.06%, sys=3.68%, ctx=8610, majf=0, minf=1 00:16:28.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 issued rwts: total=8575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.377 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87126: Sun Jul 14 18:32:35 2024 00:16:28.377 read: IOPS=3783, BW=14.8MiB/s (15.5MB/s)(43.2MiB/2926msec) 00:16:28.377 slat (usec): min=12, max=108, avg=20.27, stdev= 9.08 00:16:28.377 clat (usec): min=154, max=3601, avg=241.97, stdev=85.85 00:16:28.377 lat (usec): min=167, max=3635, avg=262.24, stdev=90.30 00:16:28.377 clat percentiles (usec): 00:16:28.377 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:16:28.377 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 229], 00:16:28.377 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 363], 95.00th=[ 408], 00:16:28.377 | 99.00th=[ 465], 99.50th=[ 490], 99.90th=[ 594], 99.95th=[ 865], 00:16:28.377 | 99.99th=[ 2573] 00:16:28.377 bw ( KiB/s): min= 9320, max=17144, per=31.07%, avg=14841.60, stdev=3274.18, samples=5 00:16:28.377 iops : min= 2330, max= 4286, avg=3710.40, stdev=818.54, samples=5 00:16:28.377 lat (usec) : 250=73.64%, 500=25.95%, 750=0.33%, 1000=0.02% 00:16:28.377 lat (msec) : 2=0.01%, 4=0.04% 00:16:28.377 cpu : usr=1.44%, sys=6.15%, ctx=11073, majf=0, minf=1 00:16:28.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.377 issued rwts: total=11071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.377 00:16:28.377 Run status group 0 (all jobs): 00:16:28.377 READ: bw=46.6MiB/s (48.9MB/s), 10.4MiB/s-16.7MiB/s (10.9MB/s-17.5MB/s), io=174MiB (182MB), run=2926-3730msec 00:16:28.377 00:16:28.377 Disk stats (read/write): 00:16:28.377 nvme0n1: ios=14389/0, merge=0/0, ticks=3110/0, in_queue=3110, util=95.05% 00:16:28.377 nvme0n2: ios=9478/0, merge=0/0, ticks=3336/0, in_queue=3336, util=95.48% 00:16:28.377 nvme0n3: ios=8290/0, merge=0/0, ticks=2917/0, in_queue=2917, util=96.43% 00:16:28.377 nvme0n4: ios=10862/0, merge=0/0, ticks=2690/0, in_queue=2690, util=96.79% 00:16:28.377 18:32:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.377 18:32:35 -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:28.637 18:32:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.637 18:32:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:28.896 18:32:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.896 18:32:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:29.154 18:32:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:29.154 18:32:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:29.413 18:32:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:29.413 18:32:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:29.672 18:32:36 -- target/fio.sh@69 -- # fio_status=0 00:16:29.672 18:32:36 -- target/fio.sh@70 -- # wait 87083 00:16:29.672 18:32:36 -- target/fio.sh@70 -- # fio_status=4 00:16:29.672 18:32:36 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.672 18:32:37 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.672 18:32:37 -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.672 18:32:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:29.672 18:32:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.672 18:32:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.672 18:32:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:29.672 18:32:37 -- common/autotest_common.sh@1210 -- # return 0 00:16:29.672 nvmf hotplug test: fio failed as expected 00:16:29.672 18:32:37 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:29.672 18:32:37 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:29.672 18:32:37 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.931 18:32:37 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:29.931 18:32:37 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:29.931 18:32:37 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:29.931 18:32:37 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:29.931 18:32:37 -- target/fio.sh@91 -- # nvmftestfini 00:16:29.931 18:32:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:29.931 18:32:37 -- nvmf/common.sh@116 -- # sync 00:16:29.931 18:32:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:29.931 18:32:37 -- nvmf/common.sh@119 -- # set +e 00:16:29.931 18:32:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:29.931 18:32:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:29.931 rmmod nvme_tcp 00:16:30.190 rmmod nvme_fabrics 00:16:30.190 rmmod nvme_keyring 00:16:30.190 18:32:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:30.190 18:32:37 -- nvmf/common.sh@123 -- # set -e 00:16:30.190 18:32:37 -- nvmf/common.sh@124 -- # return 0 00:16:30.190 18:32:37 -- nvmf/common.sh@477 -- # '[' -n 86588 ']' 00:16:30.190 18:32:37 -- nvmf/common.sh@478 -- # killprocess 86588 00:16:30.190 18:32:37 -- common/autotest_common.sh@926 -- # '[' -z 86588 
']' 00:16:30.190 18:32:37 -- common/autotest_common.sh@930 -- # kill -0 86588 00:16:30.190 18:32:37 -- common/autotest_common.sh@931 -- # uname 00:16:30.190 18:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.190 18:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86588 00:16:30.190 killing process with pid 86588 00:16:30.190 18:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:30.190 18:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:30.190 18:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86588' 00:16:30.190 18:32:37 -- common/autotest_common.sh@945 -- # kill 86588 00:16:30.190 18:32:37 -- common/autotest_common.sh@950 -- # wait 86588 00:16:30.449 18:32:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:30.449 18:32:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:30.449 18:32:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:30.449 18:32:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.449 18:32:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:30.449 18:32:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.449 18:32:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.449 18:32:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.449 18:32:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:30.449 00:16:30.449 real 0m19.571s 00:16:30.449 user 1m15.356s 00:16:30.449 sys 0m8.078s 00:16:30.449 18:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.449 18:32:37 -- common/autotest_common.sh@10 -- # set +x 00:16:30.449 ************************************ 00:16:30.449 END TEST nvmf_fio_target 00:16:30.449 ************************************ 00:16:30.449 18:32:37 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:30.449 18:32:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:30.449 18:32:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.449 18:32:37 -- common/autotest_common.sh@10 -- # set +x 00:16:30.449 ************************************ 00:16:30.449 START TEST nvmf_bdevio 00:16:30.449 ************************************ 00:16:30.449 18:32:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:30.708 * Looking for test storage... 
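The hotplug pass that just finished follows a simple pattern: start fio in the background against the connected namespaces, delete the backing bdevs over RPC while I/O is still in flight, then check that fio exits with an error. A minimal sketch of that flow, using the wrapper and RPC calls visible in the trace above (arguments copied from the log; device discovery, serial checks and cleanup are omitted):

    # 10-second read job against the nvmf-attached block devices, run in the background
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # remove the backing bdevs on the target while fio is still reading from them
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$malloc_bdev"
    done

    # every job is expected to die once its target disappears, so a non-zero exit is a pass
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    if [ "$fio_status" -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi

The err=121 lines in the fio output above are that failure surfacing: errno 121 is EREMOTEIO, reported as "Remote I/O error" once each /dev/nvme0nX loses its backing bdev.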
00:16:30.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.708 18:32:37 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.708 18:32:37 -- nvmf/common.sh@7 -- # uname -s 00:16:30.709 18:32:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.709 18:32:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.709 18:32:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.709 18:32:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.709 18:32:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.709 18:32:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.709 18:32:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.709 18:32:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.709 18:32:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.709 18:32:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.709 18:32:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:16:30.709 18:32:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:16:30.709 18:32:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.709 18:32:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.709 18:32:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.709 18:32:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.709 18:32:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.709 18:32:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.709 18:32:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.709 18:32:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.709 18:32:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.709 18:32:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.709 18:32:37 -- 
paths/export.sh@5 -- # export PATH 00:16:30.709 18:32:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.709 18:32:37 -- nvmf/common.sh@46 -- # : 0 00:16:30.709 18:32:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.709 18:32:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.709 18:32:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.709 18:32:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.709 18:32:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.709 18:32:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:30.709 18:32:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.709 18:32:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.709 18:32:37 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.709 18:32:37 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.709 18:32:37 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:30.709 18:32:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:30.709 18:32:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.709 18:32:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.709 18:32:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.709 18:32:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.709 18:32:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.709 18:32:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.709 18:32:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.709 18:32:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:30.709 18:32:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:30.709 18:32:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:30.709 18:32:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:30.709 18:32:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:30.709 18:32:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:30.709 18:32:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.709 18:32:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.709 18:32:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:30.709 18:32:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:30.709 18:32:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.709 18:32:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.709 18:32:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.709 18:32:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.709 18:32:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.709 18:32:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.709 18:32:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.709 18:32:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.709 18:32:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:30.709 
18:32:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:30.709 Cannot find device "nvmf_tgt_br" 00:16:30.709 18:32:37 -- nvmf/common.sh@154 -- # true 00:16:30.709 18:32:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.709 Cannot find device "nvmf_tgt_br2" 00:16:30.709 18:32:37 -- nvmf/common.sh@155 -- # true 00:16:30.709 18:32:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:30.709 18:32:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:30.709 Cannot find device "nvmf_tgt_br" 00:16:30.709 18:32:37 -- nvmf/common.sh@157 -- # true 00:16:30.709 18:32:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:30.709 Cannot find device "nvmf_tgt_br2" 00:16:30.709 18:32:37 -- nvmf/common.sh@158 -- # true 00:16:30.709 18:32:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:30.709 18:32:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:30.709 18:32:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.709 18:32:38 -- nvmf/common.sh@161 -- # true 00:16:30.709 18:32:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.709 18:32:38 -- nvmf/common.sh@162 -- # true 00:16:30.709 18:32:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.709 18:32:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.709 18:32:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.709 18:32:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.709 18:32:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.709 18:32:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.709 18:32:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.709 18:32:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.968 18:32:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.968 18:32:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:30.968 18:32:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:30.968 18:32:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:30.968 18:32:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:30.968 18:32:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.968 18:32:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.968 18:32:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.968 18:32:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:30.968 18:32:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:30.968 18:32:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.968 18:32:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.968 18:32:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.968 18:32:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.968 18:32:38 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.968 18:32:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:30.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:30.968 00:16:30.968 --- 10.0.0.2 ping statistics --- 00:16:30.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.968 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:30.968 18:32:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:30.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:30.968 00:16:30.969 --- 10.0.0.3 ping statistics --- 00:16:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.969 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:30.969 18:32:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:30.969 00:16:30.969 --- 10.0.0.1 ping statistics --- 00:16:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.969 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:30.969 18:32:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.969 18:32:38 -- nvmf/common.sh@421 -- # return 0 00:16:30.969 18:32:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:30.969 18:32:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.969 18:32:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:30.969 18:32:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:30.969 18:32:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.969 18:32:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:30.969 18:32:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:30.969 18:32:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:30.969 18:32:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:30.969 18:32:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:30.969 18:32:38 -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.969 18:32:38 -- nvmf/common.sh@469 -- # nvmfpid=87456 00:16:30.969 18:32:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:30.969 18:32:38 -- nvmf/common.sh@470 -- # waitforlisten 87456 00:16:30.969 18:32:38 -- common/autotest_common.sh@819 -- # '[' -z 87456 ']' 00:16:30.969 18:32:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.969 18:32:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.969 18:32:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.969 18:32:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.969 18:32:38 -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 [2024-07-14 18:32:38.340393] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
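Before the target application can listen on 10.0.0.2, nvmf_veth_init (traced above) builds a private network topology: a veth pair for the initiator side, a pair per target interface moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.x/24 addresses, a bridge joining the host-side peers, and an iptables rule admitting TCP port 4420. Condensed from the commands in the trace, with the second target interface and the error handling left out:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host-to-namespace reachability check before starting the target

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78), which is why the reactor messages that follow report cores 3 through 6 for mask 0x78.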
00:16:30.969 [2024-07-14 18:32:38.340515] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.227 [2024-07-14 18:32:38.481495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.227 [2024-07-14 18:32:38.557371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.227 [2024-07-14 18:32:38.558054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.227 [2024-07-14 18:32:38.558262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.227 [2024-07-14 18:32:38.558856] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.227 [2024-07-14 18:32:38.559347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:31.227 [2024-07-14 18:32:38.559485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:31.227 [2024-07-14 18:32:38.559620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:31.227 [2024-07-14 18:32:38.559620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.160 18:32:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.160 18:32:39 -- common/autotest_common.sh@852 -- # return 0 00:16:32.160 18:32:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.160 18:32:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.160 18:32:39 -- common/autotest_common.sh@10 -- # set +x 00:16:32.160 18:32:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.160 18:32:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:32.160 18:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.160 18:32:39 -- common/autotest_common.sh@10 -- # set +x 00:16:32.160 [2024-07-14 18:32:39.379469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.160 18:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.160 18:32:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:32.160 18:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.160 18:32:39 -- common/autotest_common.sh@10 -- # set +x 00:16:32.160 Malloc0 00:16:32.160 18:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.160 18:32:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:32.160 18:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.160 18:32:39 -- common/autotest_common.sh@10 -- # set +x 00:16:32.160 18:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.160 18:32:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.160 18:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.160 18:32:39 -- common/autotest_common.sh@10 -- # set +x 00:16:32.160 18:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.160 18:32:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.160 18:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.160 18:32:39 -- common/autotest_common.sh@10 -- # set +x 00:16:32.160 
[2024-07-14 18:32:39.444590] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.160 18:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.160 18:32:39 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:32.160 18:32:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:32.160 18:32:39 -- nvmf/common.sh@520 -- # config=() 00:16:32.160 18:32:39 -- nvmf/common.sh@520 -- # local subsystem config 00:16:32.160 18:32:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:32.160 18:32:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:32.160 { 00:16:32.160 "params": { 00:16:32.160 "name": "Nvme$subsystem", 00:16:32.160 "trtype": "$TEST_TRANSPORT", 00:16:32.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.160 "adrfam": "ipv4", 00:16:32.160 "trsvcid": "$NVMF_PORT", 00:16:32.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.160 "hdgst": ${hdgst:-false}, 00:16:32.160 "ddgst": ${ddgst:-false} 00:16:32.160 }, 00:16:32.160 "method": "bdev_nvme_attach_controller" 00:16:32.160 } 00:16:32.160 EOF 00:16:32.160 )") 00:16:32.160 18:32:39 -- nvmf/common.sh@542 -- # cat 00:16:32.160 18:32:39 -- nvmf/common.sh@544 -- # jq . 00:16:32.160 18:32:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:32.160 18:32:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:32.160 "params": { 00:16:32.160 "name": "Nvme1", 00:16:32.160 "trtype": "tcp", 00:16:32.160 "traddr": "10.0.0.2", 00:16:32.160 "adrfam": "ipv4", 00:16:32.160 "trsvcid": "4420", 00:16:32.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.160 "hdgst": false, 00:16:32.160 "ddgst": false 00:16:32.160 }, 00:16:32.160 "method": "bdev_nvme_attach_controller" 00:16:32.160 }' 00:16:32.160 [2024-07-14 18:32:39.504698] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:32.160 [2024-07-14 18:32:39.505329] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87511 ] 00:16:32.418 [2024-07-14 18:32:39.649459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.418 [2024-07-14 18:32:39.770075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.418 [2024-07-14 18:32:39.770247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.418 [2024-07-14 18:32:39.770257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.675 [2024-07-14 18:32:39.983208] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
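The gen_nvmf_target_json helper above assembles, on the fly, the JSON that bdevio reads from /dev/fd/62: a single bdev_nvme_attach_controller entry named Nvme1 pointing at the TCP listener created a few lines earlier, with header and data digests off. Attaching the same controller by hand against a running target would look roughly like this (the option letters are assumed from the standard rpc.py bdev_nvme_attach_controller interface, which this trace does not show):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Either way, the resulting block device is the Nvme1n1 namespace that the CUnit suite below exercises.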
00:16:32.675 [2024-07-14 18:32:39.983269] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:32.675 I/O targets: 00:16:32.675 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:32.675 00:16:32.675 00:16:32.675 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.675 http://cunit.sourceforge.net/ 00:16:32.675 00:16:32.675 00:16:32.675 Suite: bdevio tests on: Nvme1n1 00:16:32.675 Test: blockdev write read block ...passed 00:16:32.675 Test: blockdev write zeroes read block ...passed 00:16:32.675 Test: blockdev write zeroes read no split ...passed 00:16:32.675 Test: blockdev write zeroes read split ...passed 00:16:32.933 Test: blockdev write zeroes read split partial ...passed 00:16:32.933 Test: blockdev reset ...[2024-07-14 18:32:40.101600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:32.933 [2024-07-14 18:32:40.101906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfa4e0 (9): Bad file descriptor 00:16:32.933 [2024-07-14 18:32:40.121271] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:32.933 passed 00:16:32.933 Test: blockdev write read 8 blocks ...passed 00:16:32.933 Test: blockdev write read size > 128k ...passed 00:16:32.933 Test: blockdev write read invalid size ...passed 00:16:32.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:32.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:32.933 Test: blockdev write read max offset ...passed 00:16:32.933 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:32.933 Test: blockdev writev readv 8 blocks ...passed 00:16:32.933 Test: blockdev writev readv 30 x 1block ...passed 00:16:32.933 Test: blockdev writev readv block ...passed 00:16:32.933 Test: blockdev writev readv size > 128k ...passed 00:16:32.933 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:32.933 Test: blockdev comparev and writev ...[2024-07-14 18:32:40.298741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.298948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.298975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.298989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.299336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.299353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.299369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.299379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.299795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.299820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.299838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.299850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.300427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.300456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:32.934 [2024-07-14 18:32:40.300474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.934 [2024-07-14 18:32:40.300526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:32.934 passed 00:16:33.192 Test: blockdev nvme passthru rw ...passed 00:16:33.192 Test: blockdev nvme passthru vendor specific ...[2024-07-14 18:32:40.383135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.192 [2024-07-14 18:32:40.383162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.192 [2024-07-14 18:32:40.383306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.192 [2024-07-14 18:32:40.383335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.192 [2024-07-14 18:32:40.383457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.192 [2024-07-14 18:32:40.383472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.192 passed 00:16:33.192 Test: blockdev nvme admin passthru ...[2024-07-14 18:32:40.383615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.192 [2024-07-14 18:32:40.383637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.192 passed 00:16:33.192 Test: blockdev copy ...passed 00:16:33.192 00:16:33.192 Run Summary: Type Total Ran Passed Failed Inactive 00:16:33.192 suites 1 1 n/a 0 0 00:16:33.192 tests 23 23 23 0 0 00:16:33.192 asserts 152 152 152 0 n/a 00:16:33.192 00:16:33.192 Elapsed time = 0.924 seconds 00:16:33.450 18:32:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.450 18:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.450 18:32:40 -- common/autotest_common.sh@10 -- # set +x 00:16:33.450 18:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.450 18:32:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:33.450 18:32:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:33.450 18:32:40 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:33.450 18:32:40 -- nvmf/common.sh@116 -- # sync 00:16:33.450 18:32:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:33.450 18:32:40 -- nvmf/common.sh@119 -- # set +e 00:16:33.450 18:32:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:33.450 18:32:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:33.450 rmmod nvme_tcp 00:16:33.450 rmmod nvme_fabrics 00:16:33.450 rmmod nvme_keyring 00:16:33.450 18:32:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:33.450 18:32:40 -- nvmf/common.sh@123 -- # set -e 00:16:33.450 18:32:40 -- nvmf/common.sh@124 -- # return 0 00:16:33.450 18:32:40 -- nvmf/common.sh@477 -- # '[' -n 87456 ']' 00:16:33.450 18:32:40 -- nvmf/common.sh@478 -- # killprocess 87456 00:16:33.450 18:32:40 -- common/autotest_common.sh@926 -- # '[' -z 87456 ']' 00:16:33.450 18:32:40 -- common/autotest_common.sh@930 -- # kill -0 87456 00:16:33.450 18:32:40 -- common/autotest_common.sh@931 -- # uname 00:16:33.450 18:32:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:33.450 18:32:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87456 00:16:33.450 killing process with pid 87456 00:16:33.450 18:32:40 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:33.450 18:32:40 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:33.450 18:32:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87456' 00:16:33.450 18:32:40 -- common/autotest_common.sh@945 -- # kill 87456 00:16:33.450 18:32:40 -- common/autotest_common.sh@950 -- # wait 87456 00:16:33.708 18:32:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:33.708 18:32:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:33.708 18:32:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:33.708 18:32:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.708 18:32:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:33.708 18:32:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.708 18:32:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.708 18:32:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.966 18:32:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:33.966 00:16:33.966 real 0m3.326s 00:16:33.966 user 0m12.432s 00:16:33.966 sys 0m0.869s 00:16:33.966 18:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.966 18:32:41 -- common/autotest_common.sh@10 -- # set +x 00:16:33.966 ************************************ 00:16:33.966 END TEST nvmf_bdevio 00:16:33.966 ************************************ 00:16:33.966 18:32:41 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:33.966 18:32:41 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:33.966 18:32:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:33.966 18:32:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:33.966 18:32:41 -- common/autotest_common.sh@10 -- # set +x 00:16:33.966 ************************************ 00:16:33.966 START TEST nvmf_bdevio_no_huge 00:16:33.966 ************************************ 00:16:33.966 18:32:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:33.966 * Looking for test storage... 
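nvmftestfini, traced just above, is the mirror image of the setup: flush and unload the kernel initiator modules, kill the nvmf_tgt process recorded at nvmfappstart time, then drop the namespace and the initiator address. A trimmed sketch of that order (the retry loop around modprobe and the process_shm handling are omitted, and _remove_spdk_ns is assumed to amount to deleting the namespace):

    sync
    modprobe -v -r nvme-tcp        # the rmmod output above shows nvme_fabrics/nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # killprocess $nvmfpid
    wait "$nvmfpid"
    ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if

The same fini path runs again at the end of the no-hugepages variant that starts below.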
00:16:33.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:33.966 18:32:41 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.966 18:32:41 -- nvmf/common.sh@7 -- # uname -s 00:16:33.966 18:32:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.966 18:32:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.966 18:32:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.966 18:32:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.966 18:32:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.966 18:32:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.966 18:32:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.966 18:32:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.966 18:32:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.966 18:32:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.966 18:32:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:16:33.966 18:32:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:16:33.966 18:32:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.966 18:32:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.966 18:32:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.966 18:32:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.966 18:32:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.966 18:32:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.966 18:32:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.966 18:32:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.966 18:32:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.966 18:32:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.966 18:32:41 -- 
paths/export.sh@5 -- # export PATH 00:16:33.966 18:32:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.966 18:32:41 -- nvmf/common.sh@46 -- # : 0 00:16:33.966 18:32:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:33.966 18:32:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:33.966 18:32:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:33.966 18:32:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.966 18:32:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.966 18:32:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:33.966 18:32:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:33.966 18:32:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:33.966 18:32:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.966 18:32:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.966 18:32:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:33.966 18:32:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:33.966 18:32:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.966 18:32:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:33.966 18:32:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:33.966 18:32:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:33.966 18:32:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.966 18:32:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.966 18:32:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.966 18:32:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:33.966 18:32:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:33.966 18:32:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:33.966 18:32:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:33.966 18:32:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:33.966 18:32:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:33.966 18:32:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.966 18:32:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.966 18:32:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.966 18:32:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:33.966 18:32:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.966 18:32:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.966 18:32:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.966 18:32:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.966 18:32:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.966 18:32:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.966 18:32:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.966 18:32:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.966 18:32:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:33.966 
18:32:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:33.966 Cannot find device "nvmf_tgt_br" 00:16:33.966 18:32:41 -- nvmf/common.sh@154 -- # true 00:16:33.966 18:32:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.966 Cannot find device "nvmf_tgt_br2" 00:16:33.966 18:32:41 -- nvmf/common.sh@155 -- # true 00:16:33.966 18:32:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:33.966 18:32:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:33.967 Cannot find device "nvmf_tgt_br" 00:16:33.967 18:32:41 -- nvmf/common.sh@157 -- # true 00:16:33.967 18:32:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:33.967 Cannot find device "nvmf_tgt_br2" 00:16:33.967 18:32:41 -- nvmf/common.sh@158 -- # true 00:16:33.967 18:32:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:34.224 18:32:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:34.224 18:32:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.224 18:32:41 -- nvmf/common.sh@161 -- # true 00:16:34.224 18:32:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.224 18:32:41 -- nvmf/common.sh@162 -- # true 00:16:34.224 18:32:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.224 18:32:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.224 18:32:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.224 18:32:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.224 18:32:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.224 18:32:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.224 18:32:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.224 18:32:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:34.224 18:32:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:34.224 18:32:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:34.224 18:32:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:34.224 18:32:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:34.224 18:32:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:34.224 18:32:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.224 18:32:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.224 18:32:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.224 18:32:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:34.224 18:32:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:34.224 18:32:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.224 18:32:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.224 18:32:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:34.483 18:32:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.483 18:32:41 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.483 18:32:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:34.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:16:34.483 00:16:34.483 --- 10.0.0.2 ping statistics --- 00:16:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.483 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:34.483 18:32:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:34.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:34.483 00:16:34.483 --- 10.0.0.3 ping statistics --- 00:16:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.483 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:34.483 18:32:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:34.483 00:16:34.483 --- 10.0.0.1 ping statistics --- 00:16:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.483 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:34.483 18:32:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.483 18:32:41 -- nvmf/common.sh@421 -- # return 0 00:16:34.483 18:32:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:34.483 18:32:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.483 18:32:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:34.483 18:32:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:34.483 18:32:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.483 18:32:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:34.483 18:32:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:34.483 18:32:41 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:34.483 18:32:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:34.483 18:32:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:34.483 18:32:41 -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.483 18:32:41 -- nvmf/common.sh@469 -- # nvmfpid=87692 00:16:34.483 18:32:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:34.483 18:32:41 -- nvmf/common.sh@470 -- # waitforlisten 87692 00:16:34.483 18:32:41 -- common/autotest_common.sh@819 -- # '[' -z 87692 ']' 00:16:34.483 18:32:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.483 18:32:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:34.483 18:32:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.483 18:32:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:34.483 18:32:41 -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 [2024-07-14 18:32:41.742750] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:34.483 [2024-07-14 18:32:41.742825] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:34.483 [2024-07-14 18:32:41.883400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.742 [2024-07-14 18:32:41.994945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.742 [2024-07-14 18:32:41.995122] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.742 [2024-07-14 18:32:41.995139] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.742 [2024-07-14 18:32:41.995150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.742 [2024-07-14 18:32:41.995428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:34.742 [2024-07-14 18:32:41.996150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:34.742 [2024-07-14 18:32:41.996274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:34.742 [2024-07-14 18:32:41.996283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.308 18:32:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.308 18:32:42 -- common/autotest_common.sh@852 -- # return 0 00:16:35.308 18:32:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.308 18:32:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.308 18:32:42 -- common/autotest_common.sh@10 -- # set +x 00:16:35.308 18:32:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.308 18:32:42 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.308 18:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.308 18:32:42 -- common/autotest_common.sh@10 -- # set +x 00:16:35.308 [2024-07-14 18:32:42.716600] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.308 18:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.308 18:32:42 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.567 18:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.567 18:32:42 -- common/autotest_common.sh@10 -- # set +x 00:16:35.567 Malloc0 00:16:35.567 18:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.567 18:32:42 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.567 18:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.567 18:32:42 -- common/autotest_common.sh@10 -- # set +x 00:16:35.567 18:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.567 18:32:42 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.567 18:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.567 18:32:42 -- common/autotest_common.sh@10 -- # set +x 00:16:35.567 18:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.567 18:32:42 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.567 18:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.567 18:32:42 -- common/autotest_common.sh@10 -- # set +x 00:16:35.567 
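Functionally this pass repeats the previous bdevio run; the difference is memory handling. The target was started with --no-huge -s 1024, so the EAL parameters above show -m 1024 --no-huge --iova-mode=va and the app works out of roughly 1 GiB of ordinary (non-hugepage) memory; the bdevio client invoked a few lines below is given the same --no-huge -s 1024 pair. The target launch, copied from the trace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

With hugepages out of the picture, the EAL falls back to IOVA=VA addressing instead of the --iova-mode=pa seen in the earlier run, which is part of what this no-huge job is meant to exercise.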
[2024-07-14 18:32:42.760805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.567 18:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.567 18:32:42 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:35.567 18:32:42 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:35.567 18:32:42 -- nvmf/common.sh@520 -- # config=() 00:16:35.567 18:32:42 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.567 18:32:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.567 18:32:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.567 { 00:16:35.567 "params": { 00:16:35.567 "name": "Nvme$subsystem", 00:16:35.567 "trtype": "$TEST_TRANSPORT", 00:16:35.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.567 "adrfam": "ipv4", 00:16:35.567 "trsvcid": "$NVMF_PORT", 00:16:35.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.567 "hdgst": ${hdgst:-false}, 00:16:35.567 "ddgst": ${ddgst:-false} 00:16:35.567 }, 00:16:35.567 "method": "bdev_nvme_attach_controller" 00:16:35.567 } 00:16:35.567 EOF 00:16:35.567 )") 00:16:35.567 18:32:42 -- nvmf/common.sh@542 -- # cat 00:16:35.567 18:32:42 -- nvmf/common.sh@544 -- # jq . 00:16:35.567 18:32:42 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.567 18:32:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.567 "params": { 00:16:35.567 "name": "Nvme1", 00:16:35.567 "trtype": "tcp", 00:16:35.567 "traddr": "10.0.0.2", 00:16:35.567 "adrfam": "ipv4", 00:16:35.567 "trsvcid": "4420", 00:16:35.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.567 "hdgst": false, 00:16:35.567 "ddgst": false 00:16:35.567 }, 00:16:35.567 "method": "bdev_nvme_attach_controller" 00:16:35.567 }' 00:16:35.567 [2024-07-14 18:32:42.817431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:35.567 [2024-07-14 18:32:42.818066] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87746 ] 00:16:35.567 [2024-07-14 18:32:42.962682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.827 [2024-07-14 18:32:43.080654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.827 [2024-07-14 18:32:43.080749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.827 [2024-07-14 18:32:43.080747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.085 [2024-07-14 18:32:43.251119] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:36.085 [2024-07-14 18:32:43.251482] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:36.085 I/O targets: 00:16:36.085 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:36.085 00:16:36.085 00:16:36.085 CUnit - A unit testing framework for C - Version 2.1-3 00:16:36.085 http://cunit.sourceforge.net/ 00:16:36.085 00:16:36.085 00:16:36.085 Suite: bdevio tests on: Nvme1n1 00:16:36.085 Test: blockdev write read block ...passed 00:16:36.085 Test: blockdev write zeroes read block ...passed 00:16:36.085 Test: blockdev write zeroes read no split ...passed 00:16:36.085 Test: blockdev write zeroes read split ...passed 00:16:36.085 Test: blockdev write zeroes read split partial ...passed 00:16:36.085 Test: blockdev reset ...[2024-07-14 18:32:43.383872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:36.085 [2024-07-14 18:32:43.384437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfc160 (9): Bad file descriptor 00:16:36.085 [2024-07-14 18:32:43.400833] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:36.085 passed 00:16:36.085 Test: blockdev write read 8 blocks ...passed 00:16:36.085 Test: blockdev write read size > 128k ...passed 00:16:36.085 Test: blockdev write read invalid size ...passed 00:16:36.085 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.085 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.085 Test: blockdev write read max offset ...passed 00:16:36.343 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.343 Test: blockdev writev readv 8 blocks ...passed 00:16:36.343 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.343 Test: blockdev writev readv block ...passed 00:16:36.343 Test: blockdev writev readv size > 128k ...passed 00:16:36.343 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.343 Test: blockdev comparev and writev ...[2024-07-14 18:32:43.579775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.580010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.580059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.580085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.580530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.580553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.580585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.580599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.581183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.581230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.581263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.581276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.581659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.581686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.581709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.343 [2024-07-14 18:32:43.581721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:36.343 passed 00:16:36.343 Test: blockdev nvme passthru rw ...passed 00:16:36.343 Test: blockdev nvme passthru vendor specific ...[2024-07-14 18:32:43.666066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.343 [2024-07-14 18:32:43.666092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.666225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.343 [2024-07-14 18:32:43.666240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:36.343 [2024-07-14 18:32:43.666359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.343 [2024-07-14 18:32:43.666374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:36.343 passed 00:16:36.343 Test: blockdev nvme admin passthru ...[2024-07-14 18:32:43.666482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.343 [2024-07-14 18:32:43.666515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:36.343 passed 00:16:36.343 Test: blockdev copy ...passed 00:16:36.343 00:16:36.343 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.343 suites 1 1 n/a 0 0 00:16:36.343 tests 23 23 23 0 0 00:16:36.343 asserts 152 152 152 0 n/a 00:16:36.343 00:16:36.343 Elapsed time = 0.943 seconds 00:16:36.909 18:32:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.909 18:32:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.909 18:32:44 -- common/autotest_common.sh@10 -- # set +x 00:16:36.909 18:32:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.909 18:32:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:36.909 18:32:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:36.909 18:32:44 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:36.909 18:32:44 -- nvmf/common.sh@116 -- # sync 00:16:36.909 18:32:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:36.909 18:32:44 -- nvmf/common.sh@119 -- # set +e 00:16:36.909 18:32:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:36.909 18:32:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:36.909 rmmod nvme_tcp 00:16:36.909 rmmod nvme_fabrics 00:16:36.909 rmmod nvme_keyring 00:16:36.909 18:32:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:36.909 18:32:44 -- nvmf/common.sh@123 -- # set -e 00:16:36.909 18:32:44 -- nvmf/common.sh@124 -- # return 0 00:16:36.909 18:32:44 -- nvmf/common.sh@477 -- # '[' -n 87692 ']' 00:16:36.909 18:32:44 -- nvmf/common.sh@478 -- # killprocess 87692 00:16:36.909 18:32:44 -- common/autotest_common.sh@926 -- # '[' -z 87692 ']' 00:16:36.909 18:32:44 -- common/autotest_common.sh@930 -- # kill -0 87692 00:16:36.909 18:32:44 -- common/autotest_common.sh@931 -- # uname 00:16:36.909 18:32:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.909 18:32:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87692 00:16:36.909 killing process with pid 87692 00:16:36.909 18:32:44 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:36.910 18:32:44 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:36.910 18:32:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87692' 00:16:36.910 18:32:44 -- common/autotest_common.sh@945 -- # kill 87692 00:16:36.910 18:32:44 -- common/autotest_common.sh@950 -- # wait 87692 00:16:37.477 18:32:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:37.477 18:32:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:37.477 18:32:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:37.477 18:32:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.477 18:32:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.477 18:32:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.477 18:32:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:37.477 00:16:37.477 real 0m3.440s 00:16:37.477 user 0m12.306s 00:16:37.477 sys 0m1.296s 00:16:37.477 18:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.477 18:32:44 -- common/autotest_common.sh@10 -- # set +x 00:16:37.477 ************************************ 00:16:37.477 END TEST nvmf_bdevio_no_huge 00:16:37.477 ************************************ 00:16:37.477 18:32:44 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:37.477 18:32:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:37.477 18:32:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.477 18:32:44 -- common/autotest_common.sh@10 -- # set +x 00:16:37.477 ************************************ 00:16:37.477 START TEST nvmf_tls 00:16:37.477 ************************************ 00:16:37.477 18:32:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:37.477 * Looking for test storage... 
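The teardown traced above (nvmftestfini and nvmf_tcp_fini) unloads the kernel initiator modules, stops the target, and removes the namespace before the next test re-creates it. Roughly, and assuming _remove_spdk_ns boils down to deleting the namespace (the call itself is silenced in the trace):

    # Hedged sketch of the teardown; the namespace deletion is an assumption, the rest is traced above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # 87692 in this run
    ip netns delete nvmf_tgt_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if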
00:16:37.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:37.477 18:32:44 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:37.477 18:32:44 -- nvmf/common.sh@7 -- # uname -s 00:16:37.477 18:32:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.477 18:32:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.477 18:32:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.477 18:32:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.477 18:32:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.477 18:32:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.477 18:32:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.477 18:32:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.477 18:32:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.477 18:32:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:16:37.477 18:32:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:16:37.477 18:32:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.477 18:32:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.477 18:32:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.477 18:32:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.477 18:32:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.477 18:32:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.477 18:32:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.477 18:32:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.477 18:32:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.477 18:32:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.477 18:32:44 -- paths/export.sh@5 
-- # export PATH 00:16:37.477 18:32:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.477 18:32:44 -- nvmf/common.sh@46 -- # : 0 00:16:37.477 18:32:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:37.477 18:32:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:37.477 18:32:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:37.477 18:32:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.477 18:32:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.477 18:32:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:37.477 18:32:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:37.477 18:32:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:37.477 18:32:44 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:37.477 18:32:44 -- target/tls.sh@71 -- # nvmftestinit 00:16:37.477 18:32:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:37.477 18:32:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.477 18:32:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:37.477 18:32:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:37.477 18:32:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:37.477 18:32:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.477 18:32:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.477 18:32:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.477 18:32:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:37.477 18:32:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:37.477 18:32:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.477 18:32:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.477 18:32:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:37.477 18:32:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:37.477 18:32:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.477 18:32:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.477 18:32:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.477 18:32:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.477 18:32:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.477 18:32:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.477 18:32:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.477 18:32:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.477 18:32:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:37.477 18:32:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:37.477 Cannot find device "nvmf_tgt_br" 00:16:37.477 18:32:44 -- nvmf/common.sh@154 -- # true 00:16:37.477 18:32:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.477 Cannot find device "nvmf_tgt_br2" 00:16:37.477 18:32:44 -- nvmf/common.sh@155 -- # true 00:16:37.477 18:32:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:37.478 18:32:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:37.478 Cannot find device "nvmf_tgt_br" 00:16:37.478 18:32:44 -- nvmf/common.sh@157 -- # true 00:16:37.478 18:32:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:37.478 Cannot find device "nvmf_tgt_br2" 00:16:37.478 18:32:44 -- nvmf/common.sh@158 -- # true 00:16:37.478 18:32:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:37.736 18:32:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:37.736 18:32:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.737 18:32:44 -- nvmf/common.sh@161 -- # true 00:16:37.737 18:32:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.737 18:32:44 -- nvmf/common.sh@162 -- # true 00:16:37.737 18:32:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.737 18:32:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.737 18:32:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.737 18:32:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.737 18:32:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:37.737 18:32:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:37.737 18:32:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:37.737 18:32:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:37.737 18:32:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:37.737 18:32:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:37.737 18:32:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:37.737 18:32:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:37.737 18:32:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:37.737 18:32:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.737 18:32:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.737 18:32:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.737 18:32:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:37.737 18:32:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:37.737 18:32:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.737 18:32:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.737 18:32:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.737 18:32:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.737 18:32:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:37.737 18:32:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:37.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:37.737 00:16:37.737 --- 10.0.0.2 ping statistics --- 00:16:37.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.737 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:37.737 18:32:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:37.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:37.737 00:16:37.737 --- 10.0.0.3 ping statistics --- 00:16:37.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.737 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:37.737 18:32:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:37.737 00:16:37.737 --- 10.0.0.1 ping statistics --- 00:16:37.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.737 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:37.737 18:32:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.737 18:32:45 -- nvmf/common.sh@421 -- # return 0 00:16:37.737 18:32:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:37.737 18:32:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.737 18:32:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:37.737 18:32:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:37.737 18:32:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.737 18:32:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:37.737 18:32:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:37.996 18:32:45 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:37.996 18:32:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:37.996 18:32:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:37.996 18:32:45 -- common/autotest_common.sh@10 -- # set +x 00:16:37.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.996 18:32:45 -- nvmf/common.sh@469 -- # nvmfpid=87928 00:16:37.996 18:32:45 -- nvmf/common.sh@470 -- # waitforlisten 87928 00:16:37.996 18:32:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:37.996 18:32:45 -- common/autotest_common.sh@819 -- # '[' -z 87928 ']' 00:16:37.996 18:32:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.996 18:32:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.996 18:32:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.996 18:32:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.996 18:32:45 -- common/autotest_common.sh@10 -- # set +x 00:16:37.996 [2024-07-14 18:32:45.224311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
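The block above is nvmf_veth_init from nvmf/common.sh rebuilding the test network from scratch: a network namespace holds the target-side interfaces, the root namespace keeps the initiator interface, and a bridge forwards between the veth peers. A condensed sketch of the same topology, with the interface names and addresses exactly as traced:

    # Condensed sketch of the veth/netns topology built above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the *_br peers together
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT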
00:16:37.996 [2024-07-14 18:32:45.224405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.996 [2024-07-14 18:32:45.367157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.254 [2024-07-14 18:32:45.442305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:38.254 [2024-07-14 18:32:45.442482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.254 [2024-07-14 18:32:45.442519] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.254 [2024-07-14 18:32:45.442532] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.254 [2024-07-14 18:32:45.442573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.820 18:32:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:38.820 18:32:46 -- common/autotest_common.sh@852 -- # return 0 00:16:38.820 18:32:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:38.820 18:32:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:38.820 18:32:46 -- common/autotest_common.sh@10 -- # set +x 00:16:38.820 18:32:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.820 18:32:46 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:38.820 18:32:46 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:39.078 true 00:16:39.078 18:32:46 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:39.078 18:32:46 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:39.337 18:32:46 -- target/tls.sh@82 -- # version=0 00:16:39.337 18:32:46 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:39.337 18:32:46 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:39.595 18:32:46 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:39.595 18:32:46 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:39.853 18:32:47 -- target/tls.sh@90 -- # version=13 00:16:39.853 18:32:47 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:39.853 18:32:47 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:40.111 18:32:47 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:40.111 18:32:47 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:40.368 18:32:47 -- target/tls.sh@98 -- # version=7 00:16:40.368 18:32:47 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:40.368 18:32:47 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:40.368 18:32:47 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:40.625 18:32:47 -- target/tls.sh@105 -- # ktls=false 00:16:40.625 18:32:47 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:40.625 18:32:47 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:40.882 18:32:48 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:40.882 18:32:48 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:41.140 18:32:48 -- target/tls.sh@113 -- # ktls=true 00:16:41.140 18:32:48 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:41.140 18:32:48 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:41.398 18:32:48 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:41.398 18:32:48 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:41.656 18:32:48 -- target/tls.sh@121 -- # ktls=false 00:16:41.656 18:32:48 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:41.656 18:32:48 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:41.656 18:32:48 -- target/tls.sh@49 -- # local key hash crc 00:16:41.656 18:32:48 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:41.656 18:32:48 -- target/tls.sh@51 -- # hash=01 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # gzip -1 -c 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # tail -c8 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # head -c 4 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # crc='p$H�' 00:16:41.656 18:32:48 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:41.656 18:32:48 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:41.656 18:32:48 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:41.656 18:32:48 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:41.656 18:32:48 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:41.656 18:32:48 -- target/tls.sh@49 -- # local key hash crc 00:16:41.656 18:32:48 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:41.656 18:32:48 -- target/tls.sh@51 -- # hash=01 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # tail -c8 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # gzip -1 -c 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # head -c 4 00:16:41.656 18:32:48 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:41.656 18:32:48 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:41.656 18:32:48 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:41.656 18:32:48 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:41.656 18:32:48 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:41.656 18:32:48 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.656 18:32:48 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:41.656 18:32:48 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:41.656 18:32:48 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:41.656 18:32:48 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.656 18:32:48 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:41.656 18:32:48 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:41.914 18:32:49 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:42.171 18:32:49 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:42.171 18:32:49 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:42.171 18:32:49 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:42.429 [2024-07-14 18:32:49.603228] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.430 18:32:49 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:42.430 18:32:49 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:42.688 [2024-07-14 18:32:50.007336] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.688 [2024-07-14 18:32:50.007585] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.688 18:32:50 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:42.946 malloc0 00:16:42.946 18:32:50 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:43.205 18:32:50 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:43.464 18:32:50 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.681 Initializing NVMe Controllers 00:16:55.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:55.681 Initialization complete. Launching workers. 
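The keys echoed into key1.txt and key2.txt above use the interchange format that format_interchange_psk assembles: the configured hex key plus a CRC, base64-encoded, wrapped in the NVMeTLSkey-1 prefix and the hash identifier. A minimal re-derivation of the first key, using the same pipeline as the trace:

    # Re-derives NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as traced above.
    key=00112233445566778899aabbccddeeff     # configured hex PSK
    hash=01                                  # hash identifier used by the test
    # gzip's trailer ends with CRC32 + length, so the first 4 of the last 8 output bytes are the key's CRC32
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    echo -n "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"

The target is then pointed at the key file through nvmf_subsystem_add_listener ... -k and nvmf_subsystem_add_host ... --psk, and the initiator side (spdk_nvme_perf -S ssl --psk-path, bdevperf --psk) reads the same file, as traced above.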
00:16:55.681 ======================================================== 00:16:55.681 Latency(us) 00:16:55.681 Device Information : IOPS MiB/s Average min max 00:16:55.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10571.48 41.29 6055.43 855.69 9850.30 00:16:55.681 ======================================================== 00:16:55.681 Total : 10571.48 41.29 6055.43 855.69 9850.30 00:16:55.681 00:16:55.681 18:33:00 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.681 18:33:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:55.681 18:33:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:55.681 18:33:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:55.681 18:33:00 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:55.681 18:33:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.681 18:33:00 -- target/tls.sh@28 -- # bdevperf_pid=88293 00:16:55.681 18:33:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.681 18:33:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.681 18:33:00 -- target/tls.sh@31 -- # waitforlisten 88293 /var/tmp/bdevperf.sock 00:16:55.681 18:33:00 -- common/autotest_common.sh@819 -- # '[' -z 88293 ']' 00:16:55.681 18:33:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.681 18:33:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.681 18:33:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.681 18:33:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.681 18:33:00 -- common/autotest_common.sh@10 -- # set +x 00:16:55.681 [2024-07-14 18:33:00.944129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:55.681 [2024-07-14 18:33:00.944241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88293 ] 00:16:55.681 [2024-07-14 18:33:01.087334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.681 [2024-07-14 18:33:01.172373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.681 18:33:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.681 18:33:01 -- common/autotest_common.sh@852 -- # return 0 00:16:55.681 18:33:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.681 [2024-07-14 18:33:02.074061] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.681 TLSTESTn1 00:16:55.681 18:33:02 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:55.681 Running I/O for 10 seconds... 
00:17:05.662 00:17:05.662 Latency(us) 00:17:05.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.662 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:05.662 Verification LBA range: start 0x0 length 0x2000 00:17:05.662 TLSTESTn1 : 10.02 6033.40 23.57 0.00 0.00 21180.82 4408.79 24307.90 00:17:05.662 =================================================================================================================== 00:17:05.662 Total : 6033.40 23.57 0.00 0.00 21180.82 4408.79 24307.90 00:17:05.662 0 00:17:05.662 18:33:12 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.662 18:33:12 -- target/tls.sh@45 -- # killprocess 88293 00:17:05.662 18:33:12 -- common/autotest_common.sh@926 -- # '[' -z 88293 ']' 00:17:05.662 18:33:12 -- common/autotest_common.sh@930 -- # kill -0 88293 00:17:05.662 18:33:12 -- common/autotest_common.sh@931 -- # uname 00:17:05.662 18:33:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:05.662 18:33:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88293 00:17:05.662 killing process with pid 88293 00:17:05.662 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.662 00:17:05.662 Latency(us) 00:17:05.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.662 =================================================================================================================== 00:17:05.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.662 18:33:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:05.662 18:33:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:05.662 18:33:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88293' 00:17:05.662 18:33:12 -- common/autotest_common.sh@945 -- # kill 88293 00:17:05.662 18:33:12 -- common/autotest_common.sh@950 -- # wait 88293 00:17:05.663 18:33:12 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:05.663 18:33:12 -- common/autotest_common.sh@640 -- # local es=0 00:17:05.663 18:33:12 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:05.663 18:33:12 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:05.663 18:33:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.663 18:33:12 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:05.663 18:33:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.663 18:33:12 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:05.663 18:33:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.663 18:33:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.663 18:33:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.663 18:33:12 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:05.663 18:33:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.663 18:33:12 -- target/tls.sh@28 -- # bdevperf_pid=88441 00:17:05.663 18:33:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.663 18:33:12 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.663 18:33:12 -- target/tls.sh@31 -- # waitforlisten 88441 /var/tmp/bdevperf.sock 00:17:05.663 18:33:12 -- common/autotest_common.sh@819 -- # '[' -z 88441 ']' 00:17:05.663 18:33:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.663 18:33:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.663 18:33:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.663 18:33:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.663 18:33:12 -- common/autotest_common.sh@10 -- # set +x 00:17:05.663 [2024-07-14 18:33:12.565068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:05.663 [2024-07-14 18:33:12.565359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88441 ] 00:17:05.663 [2024-07-14 18:33:12.705039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.663 [2024-07-14 18:33:12.781919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.229 18:33:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.229 18:33:13 -- common/autotest_common.sh@852 -- # return 0 00:17:06.229 18:33:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:06.487 [2024-07-14 18:33:13.707901] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.487 [2024-07-14 18:33:13.719419] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:06.487 [2024-07-14 18:33:13.719761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e9c0 (107): Transport endpoint is not connected 00:17:06.487 [2024-07-14 18:33:13.720735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e9c0 (9): Bad file descriptor 00:17:06.487 [2024-07-14 18:33:13.721732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:06.487 [2024-07-14 18:33:13.721769] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:06.487 [2024-07-14 18:33:13.721779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
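This attach attempt is expected to fail: the target only knows the key from key1.txt, so the handshake with key2.txt is rejected and the socket read returns errno 107. The NOT wrapper from autotest_common.sh inverts the exit status; a hedged, simplified sketch of that behaviour (the helper name is from the trace, the body is an assumption):

    # Hedged sketch of the expected-failure check: the wrapped command must fail for the test to pass.
    NOT() {
        if "$@"; then
            return 1        # unexpected success
        fi
        return 0            # failure was expected
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt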
00:17:06.487 2024/07/14 18:33:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:06.487 request: 00:17:06.487 { 00:17:06.487 "method": "bdev_nvme_attach_controller", 00:17:06.487 "params": { 00:17:06.487 "name": "TLSTEST", 00:17:06.487 "trtype": "tcp", 00:17:06.487 "traddr": "10.0.0.2", 00:17:06.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.487 "adrfam": "ipv4", 00:17:06.487 "trsvcid": "4420", 00:17:06.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.487 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:06.487 } 00:17:06.487 } 00:17:06.487 Got JSON-RPC error response 00:17:06.487 GoRPCClient: error on JSON-RPC call 00:17:06.487 18:33:13 -- target/tls.sh@36 -- # killprocess 88441 00:17:06.487 18:33:13 -- common/autotest_common.sh@926 -- # '[' -z 88441 ']' 00:17:06.487 18:33:13 -- common/autotest_common.sh@930 -- # kill -0 88441 00:17:06.487 18:33:13 -- common/autotest_common.sh@931 -- # uname 00:17:06.487 18:33:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.487 18:33:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88441 00:17:06.487 killing process with pid 88441 00:17:06.487 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.487 00:17:06.487 Latency(us) 00:17:06.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.487 =================================================================================================================== 00:17:06.487 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:06.487 18:33:13 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:06.487 18:33:13 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:06.487 18:33:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88441' 00:17:06.487 18:33:13 -- common/autotest_common.sh@945 -- # kill 88441 00:17:06.487 18:33:13 -- common/autotest_common.sh@950 -- # wait 88441 00:17:06.757 18:33:13 -- target/tls.sh@37 -- # return 1 00:17:06.757 18:33:13 -- common/autotest_common.sh@643 -- # es=1 00:17:06.757 18:33:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.757 18:33:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.757 18:33:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.757 18:33:13 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:06.757 18:33:13 -- common/autotest_common.sh@640 -- # local es=0 00:17:06.757 18:33:13 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:06.757 18:33:13 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:06.757 18:33:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.757 18:33:13 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:06.757 18:33:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.757 18:33:13 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:06.757 18:33:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:06.757 18:33:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:06.758 18:33:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:06.758 18:33:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:06.758 18:33:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.758 18:33:13 -- target/tls.sh@28 -- # bdevperf_pid=88492 00:17:06.758 18:33:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.758 18:33:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.758 18:33:13 -- target/tls.sh@31 -- # waitforlisten 88492 /var/tmp/bdevperf.sock 00:17:06.758 18:33:13 -- common/autotest_common.sh@819 -- # '[' -z 88492 ']' 00:17:06.758 18:33:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.758 18:33:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:06.758 18:33:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.758 18:33:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:06.758 18:33:13 -- common/autotest_common.sh@10 -- # set +x 00:17:06.758 [2024-07-14 18:33:14.015930] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:06.758 [2024-07-14 18:33:14.016079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88492 ] 00:17:06.758 [2024-07-14 18:33:14.148889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.016 [2024-07-14 18:33:14.215101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.581 18:33:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:07.581 18:33:14 -- common/autotest_common.sh@852 -- # return 0 00:17:07.581 18:33:14 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:07.840 [2024-07-14 18:33:15.243333] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.840 [2024-07-14 18:33:15.249431] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:07.840 [2024-07-14 18:33:15.249483] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:07.840 [2024-07-14 18:33:15.249576] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:07.840 [2024-07-14 18:33:15.250116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10819c0 (107): Transport endpoint is not connected 
00:17:07.840 [2024-07-14 18:33:15.251108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10819c0 (9): Bad file descriptor 00:17:07.840 [2024-07-14 18:33:15.252105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:07.840 [2024-07-14 18:33:15.252142] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:07.840 [2024-07-14 18:33:15.252167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:07.840 2024/07/14 18:33:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:07.840 request: 00:17:07.840 { 00:17:07.840 "method": "bdev_nvme_attach_controller", 00:17:07.840 "params": { 00:17:07.840 "name": "TLSTEST", 00:17:07.840 "trtype": "tcp", 00:17:07.840 "traddr": "10.0.0.2", 00:17:07.840 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:07.840 "adrfam": "ipv4", 00:17:07.840 "trsvcid": "4420", 00:17:07.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.840 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:07.840 } 00:17:07.840 } 00:17:07.840 Got JSON-RPC error response 00:17:07.840 GoRPCClient: error on JSON-RPC call 00:17:08.099 18:33:15 -- target/tls.sh@36 -- # killprocess 88492 00:17:08.099 18:33:15 -- common/autotest_common.sh@926 -- # '[' -z 88492 ']' 00:17:08.099 18:33:15 -- common/autotest_common.sh@930 -- # kill -0 88492 00:17:08.099 18:33:15 -- common/autotest_common.sh@931 -- # uname 00:17:08.099 18:33:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:08.099 18:33:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88492 00:17:08.099 killing process with pid 88492 00:17:08.099 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.099 00:17:08.099 Latency(us) 00:17:08.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.099 =================================================================================================================== 00:17:08.099 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.099 18:33:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:08.099 18:33:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:08.099 18:33:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88492' 00:17:08.099 18:33:15 -- common/autotest_common.sh@945 -- # kill 88492 00:17:08.099 18:33:15 -- common/autotest_common.sh@950 -- # wait 88492 00:17:08.099 18:33:15 -- target/tls.sh@37 -- # return 1 00:17:08.099 18:33:15 -- common/autotest_common.sh@643 -- # es=1 00:17:08.099 18:33:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:08.099 18:33:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:08.099 18:33:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:08.099 18:33:15 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.099 18:33:15 -- common/autotest_common.sh@640 -- # local es=0 00:17:08.099 18:33:15 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.099 18:33:15 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:08.099 18:33:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.099 18:33:15 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:08.099 18:33:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.099 18:33:15 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.099 18:33:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:08.099 18:33:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:08.099 18:33:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:08.099 18:33:15 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:08.099 18:33:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.099 18:33:15 -- target/tls.sh@28 -- # bdevperf_pid=88532 00:17:08.099 18:33:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.099 18:33:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.099 18:33:15 -- target/tls.sh@31 -- # waitforlisten 88532 /var/tmp/bdevperf.sock 00:17:08.099 18:33:15 -- common/autotest_common.sh@819 -- # '[' -z 88532 ']' 00:17:08.099 18:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.099 18:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.099 18:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.099 18:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.099 18:33:15 -- common/autotest_common.sh@10 -- # set +x 00:17:08.358 [2024-07-14 18:33:15.549340] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:08.358 [2024-07-14 18:33:15.549445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88532 ] 00:17:08.358 [2024-07-14 18:33:15.689184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.358 [2024-07-14 18:33:15.764847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.293 18:33:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.293 18:33:16 -- common/autotest_common.sh@852 -- # return 0 00:17:09.293 18:33:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.293 [2024-07-14 18:33:16.671106] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.293 [2024-07-14 18:33:16.679740] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:09.293 [2024-07-14 18:33:16.679782] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:09.293 [2024-07-14 18:33:16.679833] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:09.293 [2024-07-14 18:33:16.679834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12909c0 (107): Transport endpoint is not connected 00:17:09.293 [2024-07-14 18:33:16.680820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12909c0 (9): Bad file descriptor 00:17:09.293 [2024-07-14 18:33:16.681817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:09.293 [2024-07-14 18:33:16.681853] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:09.293 [2024-07-14 18:33:16.681878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:09.293 2024/07/14 18:33:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:09.293 request: 00:17:09.293 { 00:17:09.293 "method": "bdev_nvme_attach_controller", 00:17:09.293 "params": { 00:17:09.293 "name": "TLSTEST", 00:17:09.293 "trtype": "tcp", 00:17:09.293 "traddr": "10.0.0.2", 00:17:09.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.293 "adrfam": "ipv4", 00:17:09.293 "trsvcid": "4420", 00:17:09.293 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:09.293 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:09.293 } 00:17:09.293 } 00:17:09.293 Got JSON-RPC error response 00:17:09.293 GoRPCClient: error on JSON-RPC call 00:17:09.293 18:33:16 -- target/tls.sh@36 -- # killprocess 88532 00:17:09.293 18:33:16 -- common/autotest_common.sh@926 -- # '[' -z 88532 ']' 00:17:09.293 18:33:16 -- common/autotest_common.sh@930 -- # kill -0 88532 00:17:09.293 18:33:16 -- common/autotest_common.sh@931 -- # uname 00:17:09.293 18:33:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:09.293 18:33:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88532 00:17:09.551 killing process with pid 88532 00:17:09.551 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.551 00:17:09.551 Latency(us) 00:17:09.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.551 =================================================================================================================== 00:17:09.551 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:09.551 18:33:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:09.551 18:33:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:09.551 18:33:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88532' 00:17:09.551 18:33:16 -- common/autotest_common.sh@945 -- # kill 88532 00:17:09.551 18:33:16 -- common/autotest_common.sh@950 -- # wait 88532 00:17:09.551 18:33:16 -- target/tls.sh@37 -- # return 1 00:17:09.551 18:33:16 -- common/autotest_common.sh@643 -- # es=1 00:17:09.551 18:33:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:09.551 18:33:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:09.551 18:33:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:09.551 18:33:16 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:09.551 18:33:16 -- common/autotest_common.sh@640 -- # local es=0 00:17:09.551 18:33:16 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:09.551 18:33:16 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:09.551 18:33:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:09.551 18:33:16 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:09.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:09.551 18:33:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:09.551 18:33:16 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:09.551 18:33:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:09.551 18:33:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:09.551 18:33:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:09.551 18:33:16 -- target/tls.sh@23 -- # psk= 00:17:09.551 18:33:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.551 18:33:16 -- target/tls.sh@28 -- # bdevperf_pid=88579 00:17:09.551 18:33:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.551 18:33:16 -- target/tls.sh@31 -- # waitforlisten 88579 /var/tmp/bdevperf.sock 00:17:09.551 18:33:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.551 18:33:16 -- common/autotest_common.sh@819 -- # '[' -z 88579 ']' 00:17:09.551 18:33:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.551 18:33:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.551 18:33:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.551 18:33:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.551 18:33:16 -- common/autotest_common.sh@10 -- # set +x 00:17:09.809 [2024-07-14 18:33:16.976084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:09.809 [2024-07-14 18:33:16.976186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88579 ] 00:17:09.809 [2024-07-14 18:33:17.113568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.809 [2024-07-14 18:33:17.177531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.744 18:33:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.744 18:33:17 -- common/autotest_common.sh@852 -- # return 0 00:17:10.744 18:33:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:10.744 [2024-07-14 18:33:18.050040] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:10.744 [2024-07-14 18:33:18.051395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:17:10.744 [2024-07-14 18:33:18.052391] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:10.744 [2024-07-14 18:33:18.052414] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:10.744 [2024-07-14 18:33:18.052424] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:10.744 2024/07/14 18:33:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:10.744 request: 00:17:10.744 { 00:17:10.744 "method": "bdev_nvme_attach_controller", 00:17:10.744 "params": { 00:17:10.744 "name": "TLSTEST", 00:17:10.744 "trtype": "tcp", 00:17:10.744 "traddr": "10.0.0.2", 00:17:10.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.744 "adrfam": "ipv4", 00:17:10.744 "trsvcid": "4420", 00:17:10.744 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:10.744 } 00:17:10.744 } 00:17:10.744 Got JSON-RPC error response 00:17:10.744 GoRPCClient: error on JSON-RPC call 00:17:10.744 18:33:18 -- target/tls.sh@36 -- # killprocess 88579 00:17:10.744 18:33:18 -- common/autotest_common.sh@926 -- # '[' -z 88579 ']' 00:17:10.744 18:33:18 -- common/autotest_common.sh@930 -- # kill -0 88579 00:17:10.744 18:33:18 -- common/autotest_common.sh@931 -- # uname 00:17:10.744 18:33:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.744 18:33:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88579 00:17:10.744 killing process with pid 88579 00:17:10.744 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.744 00:17:10.744 Latency(us) 00:17:10.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.744 =================================================================================================================== 00:17:10.744 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.744 18:33:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:10.744 18:33:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:10.744 18:33:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88579' 00:17:10.744 18:33:18 -- common/autotest_common.sh@945 -- # kill 88579 00:17:10.744 18:33:18 -- common/autotest_common.sh@950 -- # wait 88579 00:17:11.003 18:33:18 -- target/tls.sh@37 -- # return 1 00:17:11.003 18:33:18 -- common/autotest_common.sh@643 -- # es=1 00:17:11.003 18:33:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:11.003 18:33:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:11.003 18:33:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:11.003 18:33:18 -- target/tls.sh@167 -- # killprocess 87928 00:17:11.003 18:33:18 -- common/autotest_common.sh@926 -- # '[' -z 87928 ']' 00:17:11.003 18:33:18 -- common/autotest_common.sh@930 -- # kill -0 87928 00:17:11.003 18:33:18 -- common/autotest_common.sh@931 -- # uname 00:17:11.004 18:33:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.004 18:33:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87928 00:17:11.004 killing process with pid 87928 00:17:11.004 18:33:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:11.004 18:33:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:11.004 18:33:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87928' 00:17:11.004 18:33:18 -- common/autotest_common.sh@945 -- # kill 87928 00:17:11.004 18:33:18 -- common/autotest_common.sh@950 -- # wait 87928 00:17:11.262 18:33:18 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:11.262 18:33:18 -- 
target/tls.sh@49 -- # local key hash crc 00:17:11.262 18:33:18 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:11.262 18:33:18 -- target/tls.sh@51 -- # hash=02 00:17:11.262 18:33:18 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:11.262 18:33:18 -- target/tls.sh@52 -- # gzip -1 -c 00:17:11.262 18:33:18 -- target/tls.sh@52 -- # tail -c8 00:17:11.262 18:33:18 -- target/tls.sh@52 -- # head -c 4 00:17:11.262 18:33:18 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:11.262 18:33:18 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:11.262 18:33:18 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:11.262 18:33:18 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:11.262 18:33:18 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:11.262 18:33:18 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.263 18:33:18 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:11.263 18:33:18 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.263 18:33:18 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:11.263 18:33:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:11.263 18:33:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:11.263 18:33:18 -- common/autotest_common.sh@10 -- # set +x 00:17:11.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.263 18:33:18 -- nvmf/common.sh@469 -- # nvmfpid=88638 00:17:11.263 18:33:18 -- nvmf/common.sh@470 -- # waitforlisten 88638 00:17:11.263 18:33:18 -- common/autotest_common.sh@819 -- # '[' -z 88638 ']' 00:17:11.263 18:33:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.263 18:33:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:11.263 18:33:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.263 18:33:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.263 18:33:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.263 18:33:18 -- common/autotest_common.sh@10 -- # set +x 00:17:11.263 [2024-07-14 18:33:18.589525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:11.263 [2024-07-14 18:33:18.589611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.522 [2024-07-14 18:33:18.731712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.522 [2024-07-14 18:33:18.794168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:11.522 [2024-07-14 18:33:18.794335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.522 [2024-07-14 18:33:18.794346] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
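For reference, the interchange-format key produced by the format_interchange_psk trace above can be reproduced standalone. This is a sketch only, reusing the exact key, hash id and shell pipeline shown in the log; the 4 bytes pulled from the gzip footer are the little-endian CRC32 of the key string (the RFC 1952 trailer is CRC32 followed by ISIZE), and appending them before base64-encoding gives the NVMeTLSkey-1:<hash>: form written to key_long.txt:
  # sketch: rebuild the interchange PSK exactly as the trace above does
  key=00112233445566778899aabbccddeeff0011223344556677
  hash=02
  # gzip -1 trailer = CRC32 (4 bytes, little-endian) + ISIZE; keep only the CRC32
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  # base64(key || crc32) wrapped in the NVMeTLSkey-1:<hash>:...: envelope
  psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
  echo "$psk"   # NVMeTLSkey-1:02:MDAxMTIy...NTU2Njc3wWXNJw==: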
00:17:11.522 [2024-07-14 18:33:18.794354] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.522 [2024-07-14 18:33:18.794383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.091 18:33:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:12.091 18:33:19 -- common/autotest_common.sh@852 -- # return 0 00:17:12.091 18:33:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:12.091 18:33:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:12.091 18:33:19 -- common/autotest_common.sh@10 -- # set +x 00:17:12.349 18:33:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.349 18:33:19 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.349 18:33:19 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.349 18:33:19 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:12.349 [2024-07-14 18:33:19.735328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.349 18:33:19 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:12.607 18:33:19 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:12.865 [2024-07-14 18:33:20.147410] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:12.865 [2024-07-14 18:33:20.147710] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.865 18:33:20 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:13.124 malloc0 00:17:13.124 18:33:20 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:13.382 18:33:20 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
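Condensed from the rpc.py calls traced above, the target-side TLS setup used by this test amounts to the following sequence. A sketch only; the paths, NQNs, address and port are the ones appearing in this run, and the chmod requirement reflects the "Incorrect permissions for PSK file" failure exercised later in the log:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  chmod 0600 "$key"                                            # PSK file must not be group/other readable
  $rpc nvmf_create_transport -t tcp -o                         # TCP transport init
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"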
00:17:13.645 18:33:20 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.645 18:33:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.645 18:33:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.645 18:33:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.645 18:33:20 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:13.645 18:33:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.645 18:33:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.645 18:33:20 -- target/tls.sh@28 -- # bdevperf_pid=88736 00:17:13.645 18:33:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.645 18:33:20 -- target/tls.sh@31 -- # waitforlisten 88736 /var/tmp/bdevperf.sock 00:17:13.645 18:33:20 -- common/autotest_common.sh@819 -- # '[' -z 88736 ']' 00:17:13.645 18:33:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.645 18:33:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.645 18:33:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.645 18:33:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.645 18:33:20 -- common/autotest_common.sh@10 -- # set +x 00:17:13.645 [2024-07-14 18:33:20.871293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:13.645 [2024-07-14 18:33:20.871390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88736 ] 00:17:13.645 [2024-07-14 18:33:21.007615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.903 [2024-07-14 18:33:21.096185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.467 18:33:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.467 18:33:21 -- common/autotest_common.sh@852 -- # return 0 00:17:14.467 18:33:21 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.745 [2024-07-14 18:33:21.947611] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.745 TLSTESTn1 00:17:14.745 18:33:22 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:14.745 Running I/O for 10 seconds... 
00:17:26.946 00:17:26.946 Latency(us) 00:17:26.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.946 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.946 Verification LBA range: start 0x0 length 0x2000 00:17:26.946 TLSTESTn1 : 10.02 6100.18 23.83 0.00 0.00 20947.06 4796.04 24903.68 00:17:26.946 =================================================================================================================== 00:17:26.946 Total : 6100.18 23.83 0.00 0.00 20947.06 4796.04 24903.68 00:17:26.946 0 00:17:26.946 18:33:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.946 18:33:32 -- target/tls.sh@45 -- # killprocess 88736 00:17:26.946 18:33:32 -- common/autotest_common.sh@926 -- # '[' -z 88736 ']' 00:17:26.946 18:33:32 -- common/autotest_common.sh@930 -- # kill -0 88736 00:17:26.946 18:33:32 -- common/autotest_common.sh@931 -- # uname 00:17:26.946 18:33:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.946 18:33:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88736 00:17:26.946 killing process with pid 88736 00:17:26.946 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.946 00:17:26.946 Latency(us) 00:17:26.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.946 =================================================================================================================== 00:17:26.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.946 18:33:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:26.946 18:33:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:26.946 18:33:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88736' 00:17:26.946 18:33:32 -- common/autotest_common.sh@945 -- # kill 88736 00:17:26.946 18:33:32 -- common/autotest_common.sh@950 -- # wait 88736 00:17:26.946 18:33:32 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.946 18:33:32 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.946 18:33:32 -- common/autotest_common.sh@640 -- # local es=0 00:17:26.946 18:33:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.946 18:33:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:26.946 18:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.946 18:33:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:26.946 18:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.946 18:33:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.946 18:33:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:26.946 18:33:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:26.946 18:33:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.946 18:33:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:26.946 18:33:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.946 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock... 00:17:26.946 18:33:32 -- target/tls.sh@28 -- # bdevperf_pid=88889 00:17:26.946 18:33:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.946 18:33:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.946 18:33:32 -- target/tls.sh@31 -- # waitforlisten 88889 /var/tmp/bdevperf.sock 00:17:26.946 18:33:32 -- common/autotest_common.sh@819 -- # '[' -z 88889 ']' 00:17:26.946 18:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.946 18:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.946 18:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.946 18:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.946 18:33:32 -- common/autotest_common.sh@10 -- # set +x 00:17:26.946 [2024-07-14 18:33:32.473143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:26.946 [2024-07-14 18:33:32.473255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88889 ] 00:17:26.946 [2024-07-14 18:33:32.612993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.946 [2024-07-14 18:33:32.682237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.946 18:33:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.946 18:33:33 -- common/autotest_common.sh@852 -- # return 0 00:17:26.946 18:33:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.946 [2024-07-14 18:33:33.582627] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.946 [2024-07-14 18:33:33.582679] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:26.947 2024/07/14 18:33:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.947 request: 00:17:26.947 { 00:17:26.947 "method": "bdev_nvme_attach_controller", 00:17:26.947 "params": { 00:17:26.947 "name": "TLSTEST", 00:17:26.947 "trtype": "tcp", 00:17:26.947 "traddr": "10.0.0.2", 00:17:26.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.947 "adrfam": "ipv4", 00:17:26.947 "trsvcid": "4420", 00:17:26.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.947 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:26.947 } 00:17:26.947 } 00:17:26.947 Got JSON-RPC error response 00:17:26.947 GoRPCClient: error on JSON-RPC call 00:17:26.947 18:33:33 -- target/tls.sh@36 -- # killprocess 88889 00:17:26.947 18:33:33 -- common/autotest_common.sh@926 -- # '[' -z 88889 ']' 
00:17:26.947 18:33:33 -- common/autotest_common.sh@930 -- # kill -0 88889 00:17:26.947 18:33:33 -- common/autotest_common.sh@931 -- # uname 00:17:26.947 18:33:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.947 18:33:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88889 00:17:26.947 killing process with pid 88889 00:17:26.947 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.947 00:17:26.947 Latency(us) 00:17:26.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.947 =================================================================================================================== 00:17:26.947 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.947 18:33:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:26.947 18:33:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:26.947 18:33:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88889' 00:17:26.947 18:33:33 -- common/autotest_common.sh@945 -- # kill 88889 00:17:26.947 18:33:33 -- common/autotest_common.sh@950 -- # wait 88889 00:17:26.947 18:33:33 -- target/tls.sh@37 -- # return 1 00:17:26.947 18:33:33 -- common/autotest_common.sh@643 -- # es=1 00:17:26.947 18:33:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:26.947 18:33:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:26.947 18:33:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:26.947 18:33:33 -- target/tls.sh@183 -- # killprocess 88638 00:17:26.947 18:33:33 -- common/autotest_common.sh@926 -- # '[' -z 88638 ']' 00:17:26.947 18:33:33 -- common/autotest_common.sh@930 -- # kill -0 88638 00:17:26.947 18:33:33 -- common/autotest_common.sh@931 -- # uname 00:17:26.947 18:33:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.947 18:33:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88638 00:17:26.947 killing process with pid 88638 00:17:26.947 18:33:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:26.947 18:33:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:26.947 18:33:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88638' 00:17:26.947 18:33:33 -- common/autotest_common.sh@945 -- # kill 88638 00:17:26.947 18:33:33 -- common/autotest_common.sh@950 -- # wait 88638 00:17:26.947 18:33:34 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:26.947 18:33:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:26.947 18:33:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:26.947 18:33:34 -- common/autotest_common.sh@10 -- # set +x 00:17:26.947 18:33:34 -- nvmf/common.sh@469 -- # nvmfpid=88934 00:17:26.947 18:33:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.947 18:33:34 -- nvmf/common.sh@470 -- # waitforlisten 88934 00:17:26.947 18:33:34 -- common/autotest_common.sh@819 -- # '[' -z 88934 ']' 00:17:26.947 18:33:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.947 18:33:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.947 18:33:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:26.947 18:33:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.947 18:33:34 -- common/autotest_common.sh@10 -- # set +x 00:17:26.947 [2024-07-14 18:33:34.109579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:26.947 [2024-07-14 18:33:34.109725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.947 [2024-07-14 18:33:34.252167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.947 [2024-07-14 18:33:34.320126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:26.947 [2024-07-14 18:33:34.320290] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.947 [2024-07-14 18:33:34.320302] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.947 [2024-07-14 18:33:34.320310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.947 [2024-07-14 18:33:34.320343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.883 18:33:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:27.884 18:33:35 -- common/autotest_common.sh@852 -- # return 0 00:17:27.884 18:33:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:27.884 18:33:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:27.884 18:33:35 -- common/autotest_common.sh@10 -- # set +x 00:17:27.884 18:33:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.884 18:33:35 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.884 18:33:35 -- common/autotest_common.sh@640 -- # local es=0 00:17:27.884 18:33:35 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.884 18:33:35 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:27.884 18:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.884 18:33:35 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:27.884 18:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.884 18:33:35 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.884 18:33:35 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.884 18:33:35 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:27.884 [2024-07-14 18:33:35.278436] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.884 18:33:35 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:28.142 18:33:35 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:28.401 [2024-07-14 18:33:35.690580] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:28.401 [2024-07-14 18:33:35.690782] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:17:28.401 18:33:35 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:28.660 malloc0 00:17:28.660 18:33:35 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:28.919 18:33:36 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.919 [2024-07-14 18:33:36.318557] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:28.919 [2024-07-14 18:33:36.318591] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:28.919 [2024-07-14 18:33:36.318626] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:28.919 2024/07/14 18:33:36 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:28.919 request: 00:17:28.919 { 00:17:28.919 "method": "nvmf_subsystem_add_host", 00:17:28.919 "params": { 00:17:28.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.919 "host": "nqn.2016-06.io.spdk:host1", 00:17:28.919 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:28.919 } 00:17:28.919 } 00:17:28.919 Got JSON-RPC error response 00:17:28.919 GoRPCClient: error on JSON-RPC call 00:17:28.919 18:33:36 -- common/autotest_common.sh@643 -- # es=1 00:17:28.919 18:33:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:28.919 18:33:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:28.919 18:33:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:28.919 18:33:36 -- target/tls.sh@189 -- # killprocess 88934 00:17:28.919 18:33:36 -- common/autotest_common.sh@926 -- # '[' -z 88934 ']' 00:17:28.919 18:33:36 -- common/autotest_common.sh@930 -- # kill -0 88934 00:17:28.919 18:33:36 -- common/autotest_common.sh@931 -- # uname 00:17:29.177 18:33:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.177 18:33:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88934 00:17:29.177 18:33:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:29.177 killing process with pid 88934 00:17:29.177 18:33:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:29.177 18:33:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88934' 00:17:29.177 18:33:36 -- common/autotest_common.sh@945 -- # kill 88934 00:17:29.177 18:33:36 -- common/autotest_common.sh@950 -- # wait 88934 00:17:29.177 18:33:36 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:29.177 18:33:36 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:29.177 18:33:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.177 18:33:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:29.177 18:33:36 -- common/autotest_common.sh@10 -- # set +x 00:17:29.177 18:33:36 -- nvmf/common.sh@469 -- # nvmfpid=89045 00:17:29.177 18:33:36 -- nvmf/common.sh@470 -- # waitforlisten 89045 00:17:29.177 18:33:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.177 18:33:36 -- 
common/autotest_common.sh@819 -- # '[' -z 89045 ']' 00:17:29.177 18:33:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.177 18:33:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.177 18:33:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.177 18:33:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.177 18:33:36 -- common/autotest_common.sh@10 -- # set +x 00:17:29.436 [2024-07-14 18:33:36.631437] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:29.436 [2024-07-14 18:33:36.631561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.436 [2024-07-14 18:33:36.771654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.436 [2024-07-14 18:33:36.851372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.436 [2024-07-14 18:33:36.851534] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.436 [2024-07-14 18:33:36.851565] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.436 [2024-07-14 18:33:36.851573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.436 [2024-07-14 18:33:36.851603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.372 18:33:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.372 18:33:37 -- common/autotest_common.sh@852 -- # return 0 00:17:30.372 18:33:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.372 18:33:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.372 18:33:37 -- common/autotest_common.sh@10 -- # set +x 00:17:30.372 18:33:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.372 18:33:37 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:30.372 18:33:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:30.372 18:33:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:30.372 [2024-07-14 18:33:37.778692] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.631 18:33:37 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:30.631 18:33:38 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:30.890 [2024-07-14 18:33:38.206893] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.890 [2024-07-14 18:33:38.207099] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.890 18:33:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:31.148 malloc0 00:17:31.148 18:33:38 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:31.407 18:33:38 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.665 18:33:38 -- target/tls.sh@197 -- # bdevperf_pid=89142 00:17:31.665 18:33:38 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.665 18:33:38 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.665 18:33:38 -- target/tls.sh@200 -- # waitforlisten 89142 /var/tmp/bdevperf.sock 00:17:31.665 18:33:38 -- common/autotest_common.sh@819 -- # '[' -z 89142 ']' 00:17:31.665 18:33:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.665 18:33:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:31.665 18:33:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.665 18:33:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:31.665 18:33:38 -- common/autotest_common.sh@10 -- # set +x 00:17:31.665 [2024-07-14 18:33:38.895559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:31.665 [2024-07-14 18:33:38.895700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89142 ] 00:17:31.665 [2024-07-14 18:33:39.039314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.923 [2024-07-14 18:33:39.120385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.489 18:33:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:32.489 18:33:39 -- common/autotest_common.sh@852 -- # return 0 00:17:32.489 18:33:39 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:32.746 [2024-07-14 18:33:40.031560] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.746 TLSTESTn1 00:17:32.746 18:33:40 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:33.311 18:33:40 -- target/tls.sh@205 -- # tgtconf='{ 00:17:33.311 "subsystems": [ 00:17:33.311 { 00:17:33.311 "subsystem": "iobuf", 00:17:33.311 "config": [ 00:17:33.311 { 00:17:33.311 "method": "iobuf_set_options", 00:17:33.311 "params": { 00:17:33.311 "large_bufsize": 135168, 00:17:33.311 "large_pool_count": 1024, 00:17:33.311 "small_bufsize": 8192, 00:17:33.311 "small_pool_count": 8192 00:17:33.311 } 00:17:33.311 } 00:17:33.311 ] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "sock", 00:17:33.311 "config": [ 00:17:33.311 { 00:17:33.311 "method": "sock_impl_set_options", 00:17:33.311 "params": { 00:17:33.311 "enable_ktls": false, 00:17:33.311 "enable_placement_id": 0, 00:17:33.311 "enable_quickack": false, 00:17:33.311 "enable_recv_pipe": true, 00:17:33.311 
"enable_zerocopy_send_client": false, 00:17:33.311 "enable_zerocopy_send_server": true, 00:17:33.311 "impl_name": "posix", 00:17:33.311 "recv_buf_size": 2097152, 00:17:33.311 "send_buf_size": 2097152, 00:17:33.311 "tls_version": 0, 00:17:33.311 "zerocopy_threshold": 0 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "sock_impl_set_options", 00:17:33.311 "params": { 00:17:33.311 "enable_ktls": false, 00:17:33.311 "enable_placement_id": 0, 00:17:33.311 "enable_quickack": false, 00:17:33.311 "enable_recv_pipe": true, 00:17:33.311 "enable_zerocopy_send_client": false, 00:17:33.311 "enable_zerocopy_send_server": true, 00:17:33.311 "impl_name": "ssl", 00:17:33.311 "recv_buf_size": 4096, 00:17:33.311 "send_buf_size": 4096, 00:17:33.311 "tls_version": 0, 00:17:33.311 "zerocopy_threshold": 0 00:17:33.311 } 00:17:33.311 } 00:17:33.311 ] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "vmd", 00:17:33.311 "config": [] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "accel", 00:17:33.311 "config": [ 00:17:33.311 { 00:17:33.311 "method": "accel_set_options", 00:17:33.311 "params": { 00:17:33.311 "buf_count": 2048, 00:17:33.311 "large_cache_size": 16, 00:17:33.311 "sequence_count": 2048, 00:17:33.311 "small_cache_size": 128, 00:17:33.311 "task_count": 2048 00:17:33.311 } 00:17:33.311 } 00:17:33.311 ] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "bdev", 00:17:33.311 "config": [ 00:17:33.311 { 00:17:33.311 "method": "bdev_set_options", 00:17:33.311 "params": { 00:17:33.311 "bdev_auto_examine": true, 00:17:33.311 "bdev_io_cache_size": 256, 00:17:33.311 "bdev_io_pool_size": 65535, 00:17:33.311 "iobuf_large_cache_size": 16, 00:17:33.311 "iobuf_small_cache_size": 128 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "bdev_raid_set_options", 00:17:33.311 "params": { 00:17:33.311 "process_window_size_kb": 1024 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "bdev_iscsi_set_options", 00:17:33.311 "params": { 00:17:33.311 "timeout_sec": 30 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "bdev_nvme_set_options", 00:17:33.311 "params": { 00:17:33.311 "action_on_timeout": "none", 00:17:33.311 "allow_accel_sequence": false, 00:17:33.311 "arbitration_burst": 0, 00:17:33.311 "bdev_retry_count": 3, 00:17:33.311 "ctrlr_loss_timeout_sec": 0, 00:17:33.311 "delay_cmd_submit": true, 00:17:33.311 "fast_io_fail_timeout_sec": 0, 00:17:33.311 "generate_uuids": false, 00:17:33.311 "high_priority_weight": 0, 00:17:33.311 "io_path_stat": false, 00:17:33.311 "io_queue_requests": 0, 00:17:33.311 "keep_alive_timeout_ms": 10000, 00:17:33.311 "low_priority_weight": 0, 00:17:33.311 "medium_priority_weight": 0, 00:17:33.311 "nvme_adminq_poll_period_us": 10000, 00:17:33.311 "nvme_ioq_poll_period_us": 0, 00:17:33.311 "reconnect_delay_sec": 0, 00:17:33.311 "timeout_admin_us": 0, 00:17:33.311 "timeout_us": 0, 00:17:33.311 "transport_ack_timeout": 0, 00:17:33.311 "transport_retry_count": 4, 00:17:33.311 "transport_tos": 0 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "bdev_nvme_set_hotplug", 00:17:33.311 "params": { 00:17:33.311 "enable": false, 00:17:33.311 "period_us": 100000 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "bdev_malloc_create", 00:17:33.311 "params": { 00:17:33.311 "block_size": 4096, 00:17:33.311 "name": "malloc0", 00:17:33.311 "num_blocks": 8192, 00:17:33.311 "optimal_io_boundary": 0, 00:17:33.311 "physical_block_size": 4096, 00:17:33.311 "uuid": 
"8b1528b3-cec3-4354-9c49-16372855e4d7" 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "bdev_wait_for_examine" 00:17:33.311 } 00:17:33.311 ] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "nbd", 00:17:33.311 "config": [] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "scheduler", 00:17:33.311 "config": [ 00:17:33.311 { 00:17:33.311 "method": "framework_set_scheduler", 00:17:33.311 "params": { 00:17:33.311 "name": "static" 00:17:33.311 } 00:17:33.311 } 00:17:33.311 ] 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "subsystem": "nvmf", 00:17:33.311 "config": [ 00:17:33.311 { 00:17:33.311 "method": "nvmf_set_config", 00:17:33.311 "params": { 00:17:33.311 "admin_cmd_passthru": { 00:17:33.311 "identify_ctrlr": false 00:17:33.311 }, 00:17:33.311 "discovery_filter": "match_any" 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "nvmf_set_max_subsystems", 00:17:33.311 "params": { 00:17:33.311 "max_subsystems": 1024 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "nvmf_set_crdt", 00:17:33.311 "params": { 00:17:33.311 "crdt1": 0, 00:17:33.311 "crdt2": 0, 00:17:33.311 "crdt3": 0 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "nvmf_create_transport", 00:17:33.311 "params": { 00:17:33.311 "abort_timeout_sec": 1, 00:17:33.311 "buf_cache_size": 4294967295, 00:17:33.311 "c2h_success": false, 00:17:33.311 "dif_insert_or_strip": false, 00:17:33.311 "in_capsule_data_size": 4096, 00:17:33.311 "io_unit_size": 131072, 00:17:33.311 "max_aq_depth": 128, 00:17:33.311 "max_io_qpairs_per_ctrlr": 127, 00:17:33.311 "max_io_size": 131072, 00:17:33.311 "max_queue_depth": 128, 00:17:33.311 "num_shared_buffers": 511, 00:17:33.311 "sock_priority": 0, 00:17:33.311 "trtype": "TCP", 00:17:33.311 "zcopy": false 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "nvmf_create_subsystem", 00:17:33.311 "params": { 00:17:33.311 "allow_any_host": false, 00:17:33.311 "ana_reporting": false, 00:17:33.311 "max_cntlid": 65519, 00:17:33.311 "max_namespaces": 10, 00:17:33.311 "min_cntlid": 1, 00:17:33.311 "model_number": "SPDK bdev Controller", 00:17:33.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.311 "serial_number": "SPDK00000000000001" 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.311 "method": "nvmf_subsystem_add_host", 00:17:33.311 "params": { 00:17:33.311 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.311 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:33.311 } 00:17:33.311 }, 00:17:33.311 { 00:17:33.312 "method": "nvmf_subsystem_add_ns", 00:17:33.312 "params": { 00:17:33.312 "namespace": { 00:17:33.312 "bdev_name": "malloc0", 00:17:33.312 "nguid": "8B1528B3CEC343549C4916372855E4D7", 00:17:33.312 "nsid": 1, 00:17:33.312 "uuid": "8b1528b3-cec3-4354-9c49-16372855e4d7" 00:17:33.312 }, 00:17:33.312 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:33.312 } 00:17:33.312 }, 00:17:33.312 { 00:17:33.312 "method": "nvmf_subsystem_add_listener", 00:17:33.312 "params": { 00:17:33.312 "listen_address": { 00:17:33.312 "adrfam": "IPv4", 00:17:33.312 "traddr": "10.0.0.2", 00:17:33.312 "trsvcid": "4420", 00:17:33.312 "trtype": "TCP" 00:17:33.312 }, 00:17:33.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.312 "secure_channel": true 00:17:33.312 } 00:17:33.312 } 00:17:33.312 ] 00:17:33.312 } 00:17:33.312 ] 00:17:33.312 }' 00:17:33.312 18:33:40 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:17:33.569 18:33:40 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:33.569 "subsystems": [ 00:17:33.569 { 00:17:33.569 "subsystem": "iobuf", 00:17:33.569 "config": [ 00:17:33.569 { 00:17:33.569 "method": "iobuf_set_options", 00:17:33.569 "params": { 00:17:33.569 "large_bufsize": 135168, 00:17:33.569 "large_pool_count": 1024, 00:17:33.569 "small_bufsize": 8192, 00:17:33.569 "small_pool_count": 8192 00:17:33.569 } 00:17:33.569 } 00:17:33.569 ] 00:17:33.569 }, 00:17:33.569 { 00:17:33.569 "subsystem": "sock", 00:17:33.569 "config": [ 00:17:33.569 { 00:17:33.569 "method": "sock_impl_set_options", 00:17:33.569 "params": { 00:17:33.569 "enable_ktls": false, 00:17:33.569 "enable_placement_id": 0, 00:17:33.569 "enable_quickack": false, 00:17:33.569 "enable_recv_pipe": true, 00:17:33.569 "enable_zerocopy_send_client": false, 00:17:33.569 "enable_zerocopy_send_server": true, 00:17:33.569 "impl_name": "posix", 00:17:33.569 "recv_buf_size": 2097152, 00:17:33.569 "send_buf_size": 2097152, 00:17:33.569 "tls_version": 0, 00:17:33.569 "zerocopy_threshold": 0 00:17:33.569 } 00:17:33.569 }, 00:17:33.569 { 00:17:33.569 "method": "sock_impl_set_options", 00:17:33.569 "params": { 00:17:33.569 "enable_ktls": false, 00:17:33.569 "enable_placement_id": 0, 00:17:33.569 "enable_quickack": false, 00:17:33.569 "enable_recv_pipe": true, 00:17:33.569 "enable_zerocopy_send_client": false, 00:17:33.569 "enable_zerocopy_send_server": true, 00:17:33.569 "impl_name": "ssl", 00:17:33.569 "recv_buf_size": 4096, 00:17:33.569 "send_buf_size": 4096, 00:17:33.569 "tls_version": 0, 00:17:33.569 "zerocopy_threshold": 0 00:17:33.569 } 00:17:33.569 } 00:17:33.569 ] 00:17:33.569 }, 00:17:33.569 { 00:17:33.569 "subsystem": "vmd", 00:17:33.569 "config": [] 00:17:33.569 }, 00:17:33.569 { 00:17:33.569 "subsystem": "accel", 00:17:33.569 "config": [ 00:17:33.569 { 00:17:33.569 "method": "accel_set_options", 00:17:33.570 "params": { 00:17:33.570 "buf_count": 2048, 00:17:33.570 "large_cache_size": 16, 00:17:33.570 "sequence_count": 2048, 00:17:33.570 "small_cache_size": 128, 00:17:33.570 "task_count": 2048 00:17:33.570 } 00:17:33.570 } 00:17:33.570 ] 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "subsystem": "bdev", 00:17:33.570 "config": [ 00:17:33.570 { 00:17:33.570 "method": "bdev_set_options", 00:17:33.570 "params": { 00:17:33.570 "bdev_auto_examine": true, 00:17:33.570 "bdev_io_cache_size": 256, 00:17:33.570 "bdev_io_pool_size": 65535, 00:17:33.570 "iobuf_large_cache_size": 16, 00:17:33.570 "iobuf_small_cache_size": 128 00:17:33.570 } 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "method": "bdev_raid_set_options", 00:17:33.570 "params": { 00:17:33.570 "process_window_size_kb": 1024 00:17:33.570 } 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "method": "bdev_iscsi_set_options", 00:17:33.570 "params": { 00:17:33.570 "timeout_sec": 30 00:17:33.570 } 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "method": "bdev_nvme_set_options", 00:17:33.570 "params": { 00:17:33.570 "action_on_timeout": "none", 00:17:33.570 "allow_accel_sequence": false, 00:17:33.570 "arbitration_burst": 0, 00:17:33.570 "bdev_retry_count": 3, 00:17:33.570 "ctrlr_loss_timeout_sec": 0, 00:17:33.570 "delay_cmd_submit": true, 00:17:33.570 "fast_io_fail_timeout_sec": 0, 00:17:33.570 "generate_uuids": false, 00:17:33.570 "high_priority_weight": 0, 00:17:33.570 "io_path_stat": false, 00:17:33.570 "io_queue_requests": 512, 00:17:33.570 "keep_alive_timeout_ms": 10000, 00:17:33.570 "low_priority_weight": 0, 00:17:33.570 "medium_priority_weight": 0, 00:17:33.570 "nvme_adminq_poll_period_us": 
10000, 00:17:33.570 "nvme_ioq_poll_period_us": 0, 00:17:33.570 "reconnect_delay_sec": 0, 00:17:33.570 "timeout_admin_us": 0, 00:17:33.570 "timeout_us": 0, 00:17:33.570 "transport_ack_timeout": 0, 00:17:33.570 "transport_retry_count": 4, 00:17:33.570 "transport_tos": 0 00:17:33.570 } 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "method": "bdev_nvme_attach_controller", 00:17:33.570 "params": { 00:17:33.570 "adrfam": "IPv4", 00:17:33.570 "ctrlr_loss_timeout_sec": 0, 00:17:33.570 "ddgst": false, 00:17:33.570 "fast_io_fail_timeout_sec": 0, 00:17:33.570 "hdgst": false, 00:17:33.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.570 "name": "TLSTEST", 00:17:33.570 "prchk_guard": false, 00:17:33.570 "prchk_reftag": false, 00:17:33.570 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:33.570 "reconnect_delay_sec": 0, 00:17:33.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.570 "traddr": "10.0.0.2", 00:17:33.570 "trsvcid": "4420", 00:17:33.570 "trtype": "TCP" 00:17:33.570 } 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "method": "bdev_nvme_set_hotplug", 00:17:33.570 "params": { 00:17:33.570 "enable": false, 00:17:33.570 "period_us": 100000 00:17:33.570 } 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "method": "bdev_wait_for_examine" 00:17:33.570 } 00:17:33.570 ] 00:17:33.570 }, 00:17:33.570 { 00:17:33.570 "subsystem": "nbd", 00:17:33.570 "config": [] 00:17:33.570 } 00:17:33.570 ] 00:17:33.570 }' 00:17:33.570 18:33:40 -- target/tls.sh@208 -- # killprocess 89142 00:17:33.570 18:33:40 -- common/autotest_common.sh@926 -- # '[' -z 89142 ']' 00:17:33.570 18:33:40 -- common/autotest_common.sh@930 -- # kill -0 89142 00:17:33.570 18:33:40 -- common/autotest_common.sh@931 -- # uname 00:17:33.570 18:33:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.570 18:33:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89142 00:17:33.570 18:33:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:33.570 18:33:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:33.570 killing process with pid 89142 00:17:33.570 18:33:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89142' 00:17:33.570 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.570 00:17:33.570 Latency(us) 00:17:33.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.570 =================================================================================================================== 00:17:33.570 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.570 18:33:40 -- common/autotest_common.sh@945 -- # kill 89142 00:17:33.570 18:33:40 -- common/autotest_common.sh@950 -- # wait 89142 00:17:33.570 18:33:40 -- target/tls.sh@209 -- # killprocess 89045 00:17:33.570 18:33:40 -- common/autotest_common.sh@926 -- # '[' -z 89045 ']' 00:17:33.570 18:33:40 -- common/autotest_common.sh@930 -- # kill -0 89045 00:17:33.570 18:33:40 -- common/autotest_common.sh@931 -- # uname 00:17:33.570 18:33:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.570 18:33:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89045 00:17:33.570 18:33:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:33.570 18:33:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:33.828 18:33:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89045' 00:17:33.828 killing process with pid 89045 00:17:33.828 18:33:40 -- 
common/autotest_common.sh@945 -- # kill 89045 00:17:33.828 18:33:40 -- common/autotest_common.sh@950 -- # wait 89045 00:17:34.087 18:33:41 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:34.087 18:33:41 -- target/tls.sh@212 -- # echo '{ 00:17:34.087 "subsystems": [ 00:17:34.087 { 00:17:34.087 "subsystem": "iobuf", 00:17:34.087 "config": [ 00:17:34.087 { 00:17:34.087 "method": "iobuf_set_options", 00:17:34.087 "params": { 00:17:34.087 "large_bufsize": 135168, 00:17:34.087 "large_pool_count": 1024, 00:17:34.087 "small_bufsize": 8192, 00:17:34.087 "small_pool_count": 8192 00:17:34.087 } 00:17:34.087 } 00:17:34.087 ] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "sock", 00:17:34.087 "config": [ 00:17:34.087 { 00:17:34.087 "method": "sock_impl_set_options", 00:17:34.087 "params": { 00:17:34.087 "enable_ktls": false, 00:17:34.087 "enable_placement_id": 0, 00:17:34.087 "enable_quickack": false, 00:17:34.087 "enable_recv_pipe": true, 00:17:34.087 "enable_zerocopy_send_client": false, 00:17:34.087 "enable_zerocopy_send_server": true, 00:17:34.087 "impl_name": "posix", 00:17:34.087 "recv_buf_size": 2097152, 00:17:34.087 "send_buf_size": 2097152, 00:17:34.087 "tls_version": 0, 00:17:34.087 "zerocopy_threshold": 0 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "sock_impl_set_options", 00:17:34.087 "params": { 00:17:34.087 "enable_ktls": false, 00:17:34.087 "enable_placement_id": 0, 00:17:34.087 "enable_quickack": false, 00:17:34.087 "enable_recv_pipe": true, 00:17:34.087 "enable_zerocopy_send_client": false, 00:17:34.087 "enable_zerocopy_send_server": true, 00:17:34.087 "impl_name": "ssl", 00:17:34.087 "recv_buf_size": 4096, 00:17:34.087 "send_buf_size": 4096, 00:17:34.087 "tls_version": 0, 00:17:34.087 "zerocopy_threshold": 0 00:17:34.087 } 00:17:34.087 } 00:17:34.087 ] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "vmd", 00:17:34.087 "config": [] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "accel", 00:17:34.087 "config": [ 00:17:34.087 { 00:17:34.087 "method": "accel_set_options", 00:17:34.087 "params": { 00:17:34.087 "buf_count": 2048, 00:17:34.087 "large_cache_size": 16, 00:17:34.087 "sequence_count": 2048, 00:17:34.087 "small_cache_size": 128, 00:17:34.087 "task_count": 2048 00:17:34.087 } 00:17:34.087 } 00:17:34.087 ] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "bdev", 00:17:34.087 "config": [ 00:17:34.087 { 00:17:34.087 "method": "bdev_set_options", 00:17:34.087 "params": { 00:17:34.087 "bdev_auto_examine": true, 00:17:34.087 "bdev_io_cache_size": 256, 00:17:34.087 "bdev_io_pool_size": 65535, 00:17:34.087 "iobuf_large_cache_size": 16, 00:17:34.087 "iobuf_small_cache_size": 128 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "bdev_raid_set_options", 00:17:34.087 "params": { 00:17:34.087 "process_window_size_kb": 1024 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "bdev_iscsi_set_options", 00:17:34.087 "params": { 00:17:34.087 "timeout_sec": 30 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "bdev_nvme_set_options", 00:17:34.087 "params": { 00:17:34.087 "action_on_timeout": "none", 00:17:34.087 "allow_accel_sequence": false, 00:17:34.087 "arbitration_burst": 0, 00:17:34.087 "bdev_retry_count": 3, 00:17:34.087 "ctrlr_loss_timeout_sec": 0, 00:17:34.087 "delay_cmd_submit": true, 00:17:34.087 "fast_io_fail_timeout_sec": 0, 00:17:34.087 "generate_uuids": false, 00:17:34.087 "high_priority_weight": 0, 00:17:34.087 "io_path_stat": false, 00:17:34.087 
"io_queue_requests": 0, 00:17:34.087 "keep_alive_timeout_ms": 10000, 00:17:34.087 "low_priority_weight": 0, 00:17:34.087 "medium_priority_weight": 0, 00:17:34.087 "nvme_adminq_poll_period_us": 10000, 00:17:34.087 "nvme_ioq_poll_period_us": 0, 00:17:34.087 "reconnect_delay_sec": 0, 00:17:34.087 "timeout_admin_us": 0, 00:17:34.087 "timeout_us": 0, 00:17:34.087 "transport_ack_timeout": 0, 00:17:34.087 "transport_retry_count": 4, 00:17:34.087 "transport_tos": 0 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "bdev_nvme_set_hotplug", 00:17:34.087 "params": { 00:17:34.087 "enable": false, 00:17:34.087 "period_us": 100000 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "bdev_malloc_create", 00:17:34.087 "params": { 00:17:34.087 "block_size": 4096, 00:17:34.087 "name": "malloc0", 00:17:34.087 "num_blocks": 8192, 00:17:34.087 "optimal_io_boundary": 0, 00:17:34.087 "physical_block_size": 4096, 00:17:34.087 "uuid": "8b1528b3-cec3-4354-9c49-16372855e4d7" 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "bdev_wait_for_examine" 00:17:34.087 } 00:17:34.087 ] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "nbd", 00:17:34.087 "config": [] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "scheduler", 00:17:34.087 "config": [ 00:17:34.087 { 00:17:34.087 "method": "framework_set_scheduler", 00:17:34.087 "params": { 00:17:34.087 "name": "static" 00:17:34.087 } 00:17:34.087 } 00:17:34.087 ] 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "subsystem": "nvmf", 00:17:34.087 "config": [ 00:17:34.087 { 00:17:34.087 "method": "nvmf_set_config", 00:17:34.087 "params": { 00:17:34.087 "admin_cmd_passthru": { 00:17:34.087 "identify_ctrlr": false 00:17:34.087 }, 00:17:34.087 "discovery_filter": "match_any" 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "nvmf_set_max_subsystems", 00:17:34.087 "params": { 00:17:34.087 "max_subsystems": 1024 00:17:34.087 } 00:17:34.087 }, 00:17:34.087 { 00:17:34.087 "method": "nvmf_set_crdt", 00:17:34.087 "params": { 00:17:34.087 "crdt1": 0, 00:17:34.087 "crdt2": 0, 00:17:34.087 "crdt3": 0 00:17:34.088 } 00:17:34.088 }, 00:17:34.088 { 00:17:34.088 "method": "nvmf_create_transport", 00:17:34.088 "params": { 00:17:34.088 "abort_timeout_sec": 1, 00:17:34.088 "buf_cache_size": 4294967295, 00:17:34.088 "c2h_success": false, 00:17:34.088 "dif_insert_or_strip": false, 00:17:34.088 "in_capsule_data_size": 4096, 00:17:34.088 "io_unit_size": 131072, 00:17:34.088 "max_aq_depth": 128, 00:17:34.088 "max_io_qpairs_per_ctrlr": 127, 00:17:34.088 "max_io_size": 131072, 00:17:34.088 "max_queue_depth": 128, 00:17:34.088 "num_shared_buffers": 511, 00:17:34.088 "sock_priority": 0, 00:17:34.088 "trtype": "TCP", 00:17:34.088 "zcopy": false 00:17:34.088 } 00:17:34.088 }, 00:17:34.088 { 00:17:34.088 "method": "nvmf_create_subsystem", 00:17:34.088 "params": { 00:17:34.088 "allow_any_host": false, 00:17:34.088 "ana_reporting": false, 00:17:34.088 "max_cntlid": 65519, 00:17:34.088 "max_namespaces": 10, 00:17:34.088 "min_cntlid": 1, 00:17:34.088 "model_number": "SPDK bdev Controller", 00:17:34.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.088 "serial_number": "SPDK00000000000001" 00:17:34.088 } 00:17:34.088 }, 00:17:34.088 { 00:17:34.088 "method": "nvmf_subsystem_add_host", 00:17:34.088 "params": { 00:17:34.088 "host": "nqn.2016-06.io.spdk:host1", 00:17:34.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.088 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:34.088 } 00:17:34.088 }, 00:17:34.088 
{ 00:17:34.088 "method": "nvmf_subsystem_add_ns", 00:17:34.088 "params": { 00:17:34.088 "namespace": { 00:17:34.088 "bdev_name": "malloc0", 00:17:34.088 "nguid": "8B1528B3CEC343549C4916372855E4D7", 00:17:34.088 "nsid": 1, 00:17:34.088 "uuid": "8b1528b3-cec3-4354-9c49-16372855e4d7" 00:17:34.088 }, 00:17:34.088 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:34.088 } 00:17:34.088 }, 00:17:34.088 { 00:17:34.088 "method": "nvmf_subsystem_add_listener", 00:17:34.088 "params": { 00:17:34.088 "listen_address": { 00:17:34.088 "adrfam": "IPv4", 00:17:34.088 "traddr": "10.0.0.2", 00:17:34.088 "trsvcid": "4420", 00:17:34.088 "trtype": "TCP" 00:17:34.088 }, 00:17:34.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.088 "secure_channel": true 00:17:34.088 } 00:17:34.088 } 00:17:34.088 ] 00:17:34.088 } 00:17:34.088 ] 00:17:34.088 }' 00:17:34.088 18:33:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.088 18:33:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:34.088 18:33:41 -- common/autotest_common.sh@10 -- # set +x 00:17:34.088 18:33:41 -- nvmf/common.sh@469 -- # nvmfpid=89215 00:17:34.088 18:33:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:34.088 18:33:41 -- nvmf/common.sh@470 -- # waitforlisten 89215 00:17:34.088 18:33:41 -- common/autotest_common.sh@819 -- # '[' -z 89215 ']' 00:17:34.088 18:33:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.088 18:33:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.088 18:33:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.088 18:33:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.088 18:33:41 -- common/autotest_common.sh@10 -- # set +x 00:17:34.088 [2024-07-14 18:33:41.330301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:34.088 [2024-07-14 18:33:41.330383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.088 [2024-07-14 18:33:41.462307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.346 [2024-07-14 18:33:41.559217] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.346 [2024-07-14 18:33:41.559388] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.346 [2024-07-14 18:33:41.559402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.346 [2024-07-14 18:33:41.559411] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.346 [2024-07-14 18:33:41.559449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.604 [2024-07-14 18:33:41.810589] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.604 [2024-07-14 18:33:41.842494] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.605 [2024-07-14 18:33:41.842794] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.864 18:33:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.864 18:33:42 -- common/autotest_common.sh@852 -- # return 0 00:17:34.864 18:33:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.864 18:33:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.864 18:33:42 -- common/autotest_common.sh@10 -- # set +x 00:17:35.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.145 18:33:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.145 18:33:42 -- target/tls.sh@216 -- # bdevperf_pid=89259 00:17:35.145 18:33:42 -- target/tls.sh@217 -- # waitforlisten 89259 /var/tmp/bdevperf.sock 00:17:35.145 18:33:42 -- common/autotest_common.sh@819 -- # '[' -z 89259 ']' 00:17:35.145 18:33:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.145 18:33:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.145 18:33:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.145 18:33:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.145 18:33:42 -- common/autotest_common.sh@10 -- # set +x 00:17:35.145 18:33:42 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:35.145 18:33:42 -- target/tls.sh@213 -- # echo '{ 00:17:35.145 "subsystems": [ 00:17:35.145 { 00:17:35.145 "subsystem": "iobuf", 00:17:35.145 "config": [ 00:17:35.145 { 00:17:35.145 "method": "iobuf_set_options", 00:17:35.145 "params": { 00:17:35.145 "large_bufsize": 135168, 00:17:35.145 "large_pool_count": 1024, 00:17:35.145 "small_bufsize": 8192, 00:17:35.145 "small_pool_count": 8192 00:17:35.145 } 00:17:35.145 } 00:17:35.145 ] 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "subsystem": "sock", 00:17:35.145 "config": [ 00:17:35.145 { 00:17:35.145 "method": "sock_impl_set_options", 00:17:35.145 "params": { 00:17:35.145 "enable_ktls": false, 00:17:35.145 "enable_placement_id": 0, 00:17:35.145 "enable_quickack": false, 00:17:35.145 "enable_recv_pipe": true, 00:17:35.145 "enable_zerocopy_send_client": false, 00:17:35.145 "enable_zerocopy_send_server": true, 00:17:35.145 "impl_name": "posix", 00:17:35.145 "recv_buf_size": 2097152, 00:17:35.145 "send_buf_size": 2097152, 00:17:35.145 "tls_version": 0, 00:17:35.145 "zerocopy_threshold": 0 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "sock_impl_set_options", 00:17:35.145 "params": { 00:17:35.145 "enable_ktls": false, 00:17:35.145 "enable_placement_id": 0, 00:17:35.145 "enable_quickack": false, 00:17:35.145 "enable_recv_pipe": true, 00:17:35.145 "enable_zerocopy_send_client": false, 00:17:35.145 "enable_zerocopy_send_server": true, 00:17:35.145 "impl_name": "ssl", 00:17:35.145 "recv_buf_size": 4096, 00:17:35.145 "send_buf_size": 4096, 00:17:35.145 "tls_version": 0, 00:17:35.145 "zerocopy_threshold": 0 
00:17:35.145 } 00:17:35.145 } 00:17:35.145 ] 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "subsystem": "vmd", 00:17:35.145 "config": [] 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "subsystem": "accel", 00:17:35.145 "config": [ 00:17:35.145 { 00:17:35.145 "method": "accel_set_options", 00:17:35.145 "params": { 00:17:35.145 "buf_count": 2048, 00:17:35.145 "large_cache_size": 16, 00:17:35.145 "sequence_count": 2048, 00:17:35.145 "small_cache_size": 128, 00:17:35.145 "task_count": 2048 00:17:35.145 } 00:17:35.145 } 00:17:35.145 ] 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "subsystem": "bdev", 00:17:35.145 "config": [ 00:17:35.145 { 00:17:35.145 "method": "bdev_set_options", 00:17:35.145 "params": { 00:17:35.145 "bdev_auto_examine": true, 00:17:35.145 "bdev_io_cache_size": 256, 00:17:35.145 "bdev_io_pool_size": 65535, 00:17:35.145 "iobuf_large_cache_size": 16, 00:17:35.145 "iobuf_small_cache_size": 128 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "bdev_raid_set_options", 00:17:35.145 "params": { 00:17:35.145 "process_window_size_kb": 1024 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "bdev_iscsi_set_options", 00:17:35.145 "params": { 00:17:35.145 "timeout_sec": 30 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "bdev_nvme_set_options", 00:17:35.145 "params": { 00:17:35.145 "action_on_timeout": "none", 00:17:35.145 "allow_accel_sequence": false, 00:17:35.145 "arbitration_burst": 0, 00:17:35.145 "bdev_retry_count": 3, 00:17:35.145 "ctrlr_loss_timeout_sec": 0, 00:17:35.145 "delay_cmd_submit": true, 00:17:35.145 "fast_io_fail_timeout_sec": 0, 00:17:35.145 "generate_uuids": false, 00:17:35.145 "high_priority_weight": 0, 00:17:35.145 "io_path_stat": false, 00:17:35.145 "io_queue_requests": 512, 00:17:35.145 "keep_alive_timeout_ms": 10000, 00:17:35.145 "low_priority_weight": 0, 00:17:35.145 "medium_priority_weight": 0, 00:17:35.145 "nvme_adminq_poll_period_us": 10000, 00:17:35.145 "nvme_ioq_poll_period_us": 0, 00:17:35.145 "reconnect_delay_sec": 0, 00:17:35.145 "timeout_admin_us": 0, 00:17:35.145 "timeout_us": 0, 00:17:35.145 "transport_ack_timeout": 0, 00:17:35.145 "transport_retry_count": 4, 00:17:35.145 "transport_tos": 0 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "bdev_nvme_attach_controller", 00:17:35.145 "params": { 00:17:35.145 "adrfam": "IPv4", 00:17:35.145 "ctrlr_loss_timeout_sec": 0, 00:17:35.145 "ddgst": false, 00:17:35.145 "fast_io_fail_timeout_sec": 0, 00:17:35.145 "hdgst": false, 00:17:35.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.145 "name": "TLSTEST", 00:17:35.145 "prchk_guard": false, 00:17:35.145 "prchk_reftag": false, 00:17:35.145 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:35.145 "reconnect_delay_sec": 0, 00:17:35.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.145 "traddr": "10.0.0.2", 00:17:35.145 "trsvcid": "4420", 00:17:35.145 "trtype": "TCP" 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "bdev_nvme_set_hotplug", 00:17:35.145 "params": { 00:17:35.145 "enable": false, 00:17:35.145 "period_us": 100000 00:17:35.145 } 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "method": "bdev_wait_for_examine" 00:17:35.145 } 00:17:35.145 ] 00:17:35.145 }, 00:17:35.145 { 00:17:35.145 "subsystem": "nbd", 00:17:35.145 "config": [] 00:17:35.145 } 00:17:35.145 ] 00:17:35.145 }' 00:17:35.145 [2024-07-14 18:33:42.373456] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
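Note: bdevperf is started here in wait mode (-z) with its configuration supplied over a file descriptor, so it sits idle until perform_tests arrives on its RPC socket; that is what lets the test bound the verify run to the 10 seconds reported below. A condensed sketch of the same launch-and-drive pattern, using the arguments from the log; bdevperf_tls.json is a hypothetical file assumed to hold the JSON configuration echoed above.

    # Sketch: start bdevperf idle, then kick off the run over its RPC socket.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c bdevperf_tls.json &
    # The test waits for the RPC socket to appear before driving I/O:
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests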
00:17:35.145 [2024-07-14 18:33:42.373579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89259 ] 00:17:35.145 [2024-07-14 18:33:42.504291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.415 [2024-07-14 18:33:42.577157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.415 [2024-07-14 18:33:42.727999] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.989 18:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:35.989 18:33:43 -- common/autotest_common.sh@852 -- # return 0 00:17:35.989 18:33:43 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:35.989 Running I/O for 10 seconds... 00:17:45.979 00:17:45.979 Latency(us) 00:17:45.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.979 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:45.979 Verification LBA range: start 0x0 length 0x2000 00:17:45.979 TLSTESTn1 : 10.02 5304.97 20.72 0.00 0.00 24084.73 5123.72 28240.06 00:17:45.979 =================================================================================================================== 00:17:45.980 Total : 5304.97 20.72 0.00 0.00 24084.73 5123.72 28240.06 00:17:45.980 0 00:17:46.261 18:33:53 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.261 18:33:53 -- target/tls.sh@223 -- # killprocess 89259 00:17:46.261 18:33:53 -- common/autotest_common.sh@926 -- # '[' -z 89259 ']' 00:17:46.261 18:33:53 -- common/autotest_common.sh@930 -- # kill -0 89259 00:17:46.261 18:33:53 -- common/autotest_common.sh@931 -- # uname 00:17:46.261 18:33:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.261 18:33:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89259 00:17:46.261 killing process with pid 89259 00:17:46.261 Received shutdown signal, test time was about 10.000000 seconds 00:17:46.261 00:17:46.261 Latency(us) 00:17:46.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.261 =================================================================================================================== 00:17:46.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.261 18:33:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:46.261 18:33:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:46.261 18:33:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89259' 00:17:46.261 18:33:53 -- common/autotest_common.sh@945 -- # kill 89259 00:17:46.261 18:33:53 -- common/autotest_common.sh@950 -- # wait 89259 00:17:46.261 18:33:53 -- target/tls.sh@224 -- # killprocess 89215 00:17:46.261 18:33:53 -- common/autotest_common.sh@926 -- # '[' -z 89215 ']' 00:17:46.261 18:33:53 -- common/autotest_common.sh@930 -- # kill -0 89215 00:17:46.261 18:33:53 -- common/autotest_common.sh@931 -- # uname 00:17:46.261 18:33:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.261 18:33:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89215 00:17:46.261 killing process with pid 89215 00:17:46.261 18:33:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:46.261 18:33:53 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:46.261 18:33:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89215' 00:17:46.261 18:33:53 -- common/autotest_common.sh@945 -- # kill 89215 00:17:46.261 18:33:53 -- common/autotest_common.sh@950 -- # wait 89215 00:17:46.520 18:33:53 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:46.520 18:33:53 -- target/tls.sh@227 -- # cleanup 00:17:46.520 18:33:53 -- target/tls.sh@15 -- # process_shm --id 0 00:17:46.520 18:33:53 -- common/autotest_common.sh@796 -- # type=--id 00:17:46.520 18:33:53 -- common/autotest_common.sh@797 -- # id=0 00:17:46.520 18:33:53 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:46.520 18:33:53 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:46.520 18:33:53 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:46.520 18:33:53 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:46.520 18:33:53 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:46.520 18:33:53 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:46.520 nvmf_trace.0 00:17:46.520 18:33:53 -- common/autotest_common.sh@811 -- # return 0 00:17:46.520 18:33:53 -- target/tls.sh@16 -- # killprocess 89259 00:17:46.520 18:33:53 -- common/autotest_common.sh@926 -- # '[' -z 89259 ']' 00:17:46.520 18:33:53 -- common/autotest_common.sh@930 -- # kill -0 89259 00:17:46.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89259) - No such process 00:17:46.520 Process with pid 89259 is not found 00:17:46.520 18:33:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89259 is not found' 00:17:46.520 18:33:53 -- target/tls.sh@17 -- # nvmftestfini 00:17:46.520 18:33:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:46.520 18:33:53 -- nvmf/common.sh@116 -- # sync 00:17:46.778 18:33:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:46.778 18:33:53 -- nvmf/common.sh@119 -- # set +e 00:17:46.778 18:33:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:46.778 18:33:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:46.778 rmmod nvme_tcp 00:17:46.778 rmmod nvme_fabrics 00:17:46.778 rmmod nvme_keyring 00:17:46.778 18:33:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:46.778 18:33:54 -- nvmf/common.sh@123 -- # set -e 00:17:46.778 18:33:54 -- nvmf/common.sh@124 -- # return 0 00:17:46.778 18:33:54 -- nvmf/common.sh@477 -- # '[' -n 89215 ']' 00:17:46.778 18:33:54 -- nvmf/common.sh@478 -- # killprocess 89215 00:17:46.779 18:33:54 -- common/autotest_common.sh@926 -- # '[' -z 89215 ']' 00:17:46.779 18:33:54 -- common/autotest_common.sh@930 -- # kill -0 89215 00:17:46.779 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89215) - No such process 00:17:46.779 Process with pid 89215 is not found 00:17:46.779 18:33:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89215 is not found' 00:17:46.779 18:33:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:46.779 18:33:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:46.779 18:33:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:46.779 18:33:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.779 18:33:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:46.779 18:33:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.779 18:33:54 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.779 18:33:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.779 18:33:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:46.779 18:33:54 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:46.779 ************************************ 00:17:46.779 END TEST nvmf_tls 00:17:46.779 ************************************ 00:17:46.779 00:17:46.779 real 1m9.373s 00:17:46.779 user 1m44.706s 00:17:46.779 sys 0m25.250s 00:17:46.779 18:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.779 18:33:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.779 18:33:54 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:46.779 18:33:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:46.779 18:33:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:46.779 18:33:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.779 ************************************ 00:17:46.779 START TEST nvmf_fips 00:17:46.779 ************************************ 00:17:46.779 18:33:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:46.779 * Looking for test storage... 00:17:47.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:47.039 18:33:54 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:47.039 18:33:54 -- nvmf/common.sh@7 -- # uname -s 00:17:47.039 18:33:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.039 18:33:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.039 18:33:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.039 18:33:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.039 18:33:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.039 18:33:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.039 18:33:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.039 18:33:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.039 18:33:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.039 18:33:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.039 18:33:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:17:47.039 18:33:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:17:47.039 18:33:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.039 18:33:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.039 18:33:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:47.039 18:33:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:47.039 18:33:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.039 18:33:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.039 18:33:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.039 18:33:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.039 18:33:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.039 18:33:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.039 18:33:54 -- paths/export.sh@5 -- # export PATH 00:17:47.039 18:33:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.039 18:33:54 -- nvmf/common.sh@46 -- # : 0 00:17:47.039 18:33:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:47.039 18:33:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:47.039 18:33:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:47.039 18:33:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.039 18:33:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.039 18:33:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:47.039 18:33:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:47.039 18:33:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:47.039 18:33:54 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.039 18:33:54 -- fips/fips.sh@89 -- # check_openssl_version 00:17:47.039 18:33:54 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:47.039 18:33:54 -- fips/fips.sh@85 -- # openssl version 00:17:47.039 18:33:54 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:47.039 18:33:54 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:47.039 18:33:54 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:47.039 18:33:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:47.039 18:33:54 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:17:47.039 18:33:54 -- scripts/common.sh@335 -- # IFS=.-: 00:17:47.039 18:33:54 -- scripts/common.sh@335 -- # read -ra ver1 00:17:47.039 18:33:54 -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.039 18:33:54 -- scripts/common.sh@336 -- # read -ra ver2 00:17:47.039 18:33:54 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:47.039 18:33:54 -- scripts/common.sh@339 -- # ver1_l=3 00:17:47.039 18:33:54 -- scripts/common.sh@340 -- # ver2_l=3 00:17:47.039 18:33:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:47.039 18:33:54 -- scripts/common.sh@343 -- # case "$op" in 00:17:47.039 18:33:54 -- scripts/common.sh@347 -- # : 1 00:17:47.039 18:33:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:47.039 18:33:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:47.039 18:33:54 -- scripts/common.sh@364 -- # decimal 3 00:17:47.039 18:33:54 -- scripts/common.sh@352 -- # local d=3 00:17:47.039 18:33:54 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:47.039 18:33:54 -- scripts/common.sh@354 -- # echo 3 00:17:47.039 18:33:54 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:47.039 18:33:54 -- scripts/common.sh@365 -- # decimal 3 00:17:47.039 18:33:54 -- scripts/common.sh@352 -- # local d=3 00:17:47.039 18:33:54 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:47.039 18:33:54 -- scripts/common.sh@354 -- # echo 3 00:17:47.039 18:33:54 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:47.039 18:33:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:47.039 18:33:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:47.039 18:33:54 -- scripts/common.sh@363 -- # (( v++ )) 00:17:47.039 18:33:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:47.039 18:33:54 -- scripts/common.sh@364 -- # decimal 0 00:17:47.039 18:33:54 -- scripts/common.sh@352 -- # local d=0 00:17:47.039 18:33:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:47.039 18:33:54 -- scripts/common.sh@354 -- # echo 0 00:17:47.039 18:33:54 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:47.039 18:33:54 -- scripts/common.sh@365 -- # decimal 0 00:17:47.039 18:33:54 -- scripts/common.sh@352 -- # local d=0 00:17:47.039 18:33:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:47.039 18:33:54 -- scripts/common.sh@354 -- # echo 0 00:17:47.039 18:33:54 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:47.039 18:33:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:47.039 18:33:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:47.039 18:33:54 -- scripts/common.sh@363 -- # (( v++ )) 00:17:47.039 18:33:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.039 18:33:54 -- scripts/common.sh@364 -- # decimal 9 00:17:47.039 18:33:54 -- scripts/common.sh@352 -- # local d=9 00:17:47.039 18:33:54 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:47.039 18:33:54 -- scripts/common.sh@354 -- # echo 9 00:17:47.039 18:33:54 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:47.039 18:33:54 -- scripts/common.sh@365 -- # decimal 0 00:17:47.039 18:33:54 -- scripts/common.sh@352 -- # local d=0 00:17:47.039 18:33:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:47.039 18:33:54 -- scripts/common.sh@354 -- # echo 0 00:17:47.039 18:33:54 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:47.039 18:33:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:47.039 18:33:54 -- scripts/common.sh@366 -- # return 0 00:17:47.039 18:33:54 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:47.039 18:33:54 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:47.039 18:33:54 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:47.039 18:33:54 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:47.039 18:33:54 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:47.039 18:33:54 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:47.039 18:33:54 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:47.039 18:33:54 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:47.039 18:33:54 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:47.039 18:33:54 -- fips/fips.sh@114 -- # build_openssl_config 00:17:47.039 18:33:54 -- fips/fips.sh@37 -- # cat 00:17:47.039 18:33:54 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:47.039 18:33:54 -- fips/fips.sh@58 -- # cat - 00:17:47.039 18:33:54 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:47.039 18:33:54 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:47.039 18:33:54 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:47.039 18:33:54 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:47.039 18:33:54 -- fips/fips.sh@117 -- # openssl list -providers 00:17:47.039 18:33:54 -- fips/fips.sh@117 -- # grep name 00:17:47.039 18:33:54 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:47.039 18:33:54 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:47.039 18:33:54 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:47.039 18:33:54 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:47.039 18:33:54 -- common/autotest_common.sh@640 -- # local es=0 00:17:47.039 18:33:54 -- fips/fips.sh@128 -- # : 00:17:47.039 18:33:54 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:47.039 18:33:54 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:47.039 18:33:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:47.039 18:33:54 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:47.039 18:33:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:47.039 18:33:54 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:47.039 18:33:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:47.040 18:33:54 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:47.040 18:33:54 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:47.040 18:33:54 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:47.040 Error setting digest 00:17:47.040 00222603967F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:47.040 00222603967F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:47.040 18:33:54 -- common/autotest_common.sh@643 -- # es=1 00:17:47.040 18:33:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:47.040 18:33:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:47.040 18:33:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:47.040 18:33:54 -- fips/fips.sh@131 -- # nvmftestinit 00:17:47.040 18:33:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:47.040 18:33:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.040 18:33:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:47.040 18:33:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:47.040 18:33:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:47.040 18:33:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.040 18:33:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.040 18:33:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.040 18:33:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:47.040 18:33:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:47.040 18:33:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:47.040 18:33:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:47.040 18:33:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:47.040 18:33:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:47.040 18:33:54 -- 
nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.040 18:33:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.040 18:33:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:47.040 18:33:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:47.040 18:33:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:47.040 18:33:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:47.040 18:33:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:47.040 18:33:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.040 18:33:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:47.040 18:33:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:47.040 18:33:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:47.040 18:33:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:47.040 18:33:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:47.040 18:33:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:47.040 Cannot find device "nvmf_tgt_br" 00:17:47.040 18:33:54 -- nvmf/common.sh@154 -- # true 00:17:47.040 18:33:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:47.040 Cannot find device "nvmf_tgt_br2" 00:17:47.040 18:33:54 -- nvmf/common.sh@155 -- # true 00:17:47.040 18:33:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:47.040 18:33:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:47.298 Cannot find device "nvmf_tgt_br" 00:17:47.298 18:33:54 -- nvmf/common.sh@157 -- # true 00:17:47.298 18:33:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:47.298 Cannot find device "nvmf_tgt_br2" 00:17:47.298 18:33:54 -- nvmf/common.sh@158 -- # true 00:17:47.298 18:33:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:47.298 18:33:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:47.298 18:33:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.298 18:33:54 -- nvmf/common.sh@161 -- # true 00:17:47.298 18:33:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.298 18:33:54 -- nvmf/common.sh@162 -- # true 00:17:47.298 18:33:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.298 18:33:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.298 18:33:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.298 18:33:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.298 18:33:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.299 18:33:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.299 18:33:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.299 18:33:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:47.299 18:33:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:47.299 18:33:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:47.299 18:33:54 -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:47.299 18:33:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:47.299 18:33:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:47.299 18:33:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.299 18:33:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.299 18:33:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.299 18:33:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:47.299 18:33:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:47.299 18:33:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.299 18:33:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.299 18:33:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.557 18:33:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.557 18:33:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.557 18:33:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:47.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:47.557 00:17:47.557 --- 10.0.0.2 ping statistics --- 00:17:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.557 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:47.557 18:33:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:47.557 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.557 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:47.557 00:17:47.557 --- 10.0.0.3 ping statistics --- 00:17:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.557 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:47.557 18:33:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:47.557 00:17:47.557 --- 10.0.0.1 ping statistics --- 00:17:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.557 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:47.557 18:33:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.557 18:33:54 -- nvmf/common.sh@421 -- # return 0 00:17:47.557 18:33:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:47.557 18:33:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.557 18:33:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:47.558 18:33:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:47.558 18:33:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.558 18:33:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:47.558 18:33:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:47.558 18:33:54 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:47.558 18:33:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:47.558 18:33:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:47.558 18:33:54 -- common/autotest_common.sh@10 -- # set +x 00:17:47.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
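Note: the ip commands above are nvmf_veth_init building the virtual testbed that stands in for real NICs: a network namespace for the target, veth pairs whose bridge-side ends are joined by nvmf_br, and three pings confirming initiator-to-target reachability. A condensed sketch of the same topology, using the names and addresses from the log (the iptables ACCEPT rules and the second target interface are omitted):

    # Sketch: minimal veth/netns topology used for the NVMe/TCP tests.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ping -c 1 10.0.0.2    # initiator -> target, as checked in the log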
00:17:47.558 18:33:54 -- nvmf/common.sh@469 -- # nvmfpid=89631 00:17:47.558 18:33:54 -- nvmf/common.sh@470 -- # waitforlisten 89631 00:17:47.558 18:33:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.558 18:33:54 -- common/autotest_common.sh@819 -- # '[' -z 89631 ']' 00:17:47.558 18:33:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.558 18:33:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:47.558 18:33:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.558 18:33:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:47.558 18:33:54 -- common/autotest_common.sh@10 -- # set +x 00:17:47.558 [2024-07-14 18:33:54.852676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:47.558 [2024-07-14 18:33:54.852769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.816 [2024-07-14 18:33:54.987545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.816 [2024-07-14 18:33:55.048915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:47.816 [2024-07-14 18:33:55.049058] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.816 [2024-07-14 18:33:55.049071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.816 [2024-07-14 18:33:55.049079] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
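Note: before this target came up, fips.sh verified that the host OpenSSL can actually operate in FIPS mode: it requires OpenSSL >= 3.0.0, checks that the fips provider module exists, points OPENSSL_CONF at a generated spdk_fips.conf, confirms that openssl list -providers reports both a base and a fips provider, and finally expects a non-approved digest such as MD5 to fail (the "Error setting digest" output above is that expected failure). A condensed sketch of the sanity check, with the generated config itself omitted:

    # Sketch of the FIPS sanity check performed earlier by fips.sh.
    version=$(openssl version | awk '{print $2}')          # must be >= 3.0.0
    [[ -f /usr/lib64/ossl-modules/fips.so ]] || echo "FIPS provider module missing"
    export OPENSSL_CONF=spdk_fips.conf                     # produced by build_openssl_config
    openssl list -providers | grep name                    # expect "base" and "fips" providers
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded - FIPS mode is not active"
    fi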
00:17:47.816 [2024-07-14 18:33:55.049104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.383 18:33:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:48.383 18:33:55 -- common/autotest_common.sh@852 -- # return 0 00:17:48.383 18:33:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.383 18:33:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:48.383 18:33:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.383 18:33:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.383 18:33:55 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:48.383 18:33:55 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:48.383 18:33:55 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:48.383 18:33:55 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:48.383 18:33:55 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:48.383 18:33:55 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:48.383 18:33:55 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:48.383 18:33:55 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:48.642 [2024-07-14 18:33:56.044458] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.642 [2024-07-14 18:33:56.060419] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.642 [2024-07-14 18:33:56.060696] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.900 malloc0 00:17:48.900 18:33:56 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.900 18:33:56 -- fips/fips.sh@148 -- # bdevperf_pid=89683 00:17:48.900 18:33:56 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.900 18:33:56 -- fips/fips.sh@149 -- # waitforlisten 89683 /var/tmp/bdevperf.sock 00:17:48.900 18:33:56 -- common/autotest_common.sh@819 -- # '[' -z 89683 ']' 00:17:48.900 18:33:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.900 18:33:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.900 18:33:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.900 18:33:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.900 18:33:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.900 [2024-07-14 18:33:56.197265] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
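Note: the PSK handling in the FIPS test mirrors tls.sh: an interchange-format key (NVMeTLSkey-1:01:...) is written to key.txt, restricted to mode 0600, registered on the target for host1, and later passed to the initiator when the TLSTEST controller is attached over the bdevperf RPC socket (the rpc.py call visible further down). A sketch of those steps, reusing the key string and paths from the log; the nvmf_subsystem_add_host flags are an assumption inferred from the "psk" parameter in the target config and may differ between SPDK releases.

    # Sketch: provision the TLS PSK on both sides (key string and paths from the log).
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    # Target side: only host1 presenting this PSK may reach cnode1 (flags assumed, see note above).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # Initiator side, as invoked later in the log against the bdevperf RPC socket:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"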
00:17:48.900 [2024-07-14 18:33:56.197564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89683 ] 00:17:49.159 [2024-07-14 18:33:56.337519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.159 [2024-07-14 18:33:56.429980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.726 18:33:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.726 18:33:57 -- common/autotest_common.sh@852 -- # return 0 00:17:49.726 18:33:57 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:49.985 [2024-07-14 18:33:57.376585] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.244 TLSTESTn1 00:17:50.244 18:33:57 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.244 Running I/O for 10 seconds... 00:18:00.310 00:18:00.310 Latency(us) 00:18:00.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:00.310 Verification LBA range: start 0x0 length 0x2000 00:18:00.310 TLSTESTn1 : 10.01 5668.09 22.14 0.00 0.00 22548.07 5362.04 27525.12 00:18:00.310 =================================================================================================================== 00:18:00.310 Total : 5668.09 22.14 0.00 0.00 22548.07 5362.04 27525.12 00:18:00.310 0 00:18:00.310 18:34:07 -- fips/fips.sh@1 -- # cleanup 00:18:00.310 18:34:07 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:00.310 18:34:07 -- common/autotest_common.sh@796 -- # type=--id 00:18:00.310 18:34:07 -- common/autotest_common.sh@797 -- # id=0 00:18:00.310 18:34:07 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:00.310 18:34:07 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:00.310 18:34:07 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:00.310 18:34:07 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:00.310 18:34:07 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:00.310 18:34:07 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:00.310 nvmf_trace.0 00:18:00.310 18:34:07 -- common/autotest_common.sh@811 -- # return 0 00:18:00.310 18:34:07 -- fips/fips.sh@16 -- # killprocess 89683 00:18:00.310 18:34:07 -- common/autotest_common.sh@926 -- # '[' -z 89683 ']' 00:18:00.310 18:34:07 -- common/autotest_common.sh@930 -- # kill -0 89683 00:18:00.310 18:34:07 -- common/autotest_common.sh@931 -- # uname 00:18:00.310 18:34:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:00.310 18:34:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89683 00:18:00.310 18:34:07 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:00.310 18:34:07 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:00.310 killing process with pid 89683 00:18:00.310 18:34:07 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 89683' 00:18:00.310 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.310 00:18:00.310 Latency(us) 00:18:00.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.310 =================================================================================================================== 00:18:00.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.310 18:34:07 -- common/autotest_common.sh@945 -- # kill 89683 00:18:00.310 18:34:07 -- common/autotest_common.sh@950 -- # wait 89683 00:18:00.568 18:34:07 -- fips/fips.sh@17 -- # nvmftestfini 00:18:00.568 18:34:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:00.568 18:34:07 -- nvmf/common.sh@116 -- # sync 00:18:00.568 18:34:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:00.568 18:34:07 -- nvmf/common.sh@119 -- # set +e 00:18:00.568 18:34:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:00.568 18:34:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:00.568 rmmod nvme_tcp 00:18:00.568 rmmod nvme_fabrics 00:18:00.568 rmmod nvme_keyring 00:18:00.827 18:34:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:00.827 18:34:08 -- nvmf/common.sh@123 -- # set -e 00:18:00.827 18:34:08 -- nvmf/common.sh@124 -- # return 0 00:18:00.827 18:34:08 -- nvmf/common.sh@477 -- # '[' -n 89631 ']' 00:18:00.827 18:34:08 -- nvmf/common.sh@478 -- # killprocess 89631 00:18:00.827 18:34:08 -- common/autotest_common.sh@926 -- # '[' -z 89631 ']' 00:18:00.827 18:34:08 -- common/autotest_common.sh@930 -- # kill -0 89631 00:18:00.827 18:34:08 -- common/autotest_common.sh@931 -- # uname 00:18:00.827 18:34:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:00.827 18:34:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89631 00:18:00.827 18:34:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:00.827 18:34:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:00.827 killing process with pid 89631 00:18:00.827 18:34:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89631' 00:18:00.827 18:34:08 -- common/autotest_common.sh@945 -- # kill 89631 00:18:00.827 18:34:08 -- common/autotest_common.sh@950 -- # wait 89631 00:18:00.827 18:34:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:00.827 18:34:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:00.827 18:34:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:00.827 18:34:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.827 18:34:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:00.827 18:34:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.827 18:34:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.827 18:34:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.086 18:34:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:01.086 18:34:08 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:01.086 ************************************ 00:18:01.086 END TEST nvmf_fips 00:18:01.086 ************************************ 00:18:01.086 00:18:01.086 real 0m14.166s 00:18:01.086 user 0m18.520s 00:18:01.086 sys 0m6.133s 00:18:01.086 18:34:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.086 18:34:08 -- common/autotest_common.sh@10 -- # set +x 00:18:01.086 18:34:08 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:01.086 18:34:08 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:01.086 18:34:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:01.086 18:34:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.086 18:34:08 -- common/autotest_common.sh@10 -- # set +x 00:18:01.086 ************************************ 00:18:01.086 START TEST nvmf_fuzz 00:18:01.086 ************************************ 00:18:01.086 18:34:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:01.086 * Looking for test storage... 00:18:01.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:01.086 18:34:08 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.086 18:34:08 -- nvmf/common.sh@7 -- # uname -s 00:18:01.086 18:34:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.086 18:34:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.086 18:34:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.086 18:34:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.086 18:34:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.086 18:34:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.086 18:34:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.086 18:34:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.086 18:34:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.086 18:34:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.086 18:34:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:18:01.086 18:34:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:18:01.086 18:34:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.086 18:34:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.086 18:34:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:01.086 18:34:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.086 18:34:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.086 18:34:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.086 18:34:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.086 18:34:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.086 18:34:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.086 
18:34:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.086 18:34:08 -- paths/export.sh@5 -- # export PATH 00:18:01.086 18:34:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.086 18:34:08 -- nvmf/common.sh@46 -- # : 0 00:18:01.086 18:34:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.086 18:34:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.086 18:34:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.086 18:34:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.086 18:34:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.086 18:34:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.086 18:34:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.086 18:34:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.086 18:34:08 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:01.086 18:34:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:01.086 18:34:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.086 18:34:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.086 18:34:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.086 18:34:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.086 18:34:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.086 18:34:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.086 18:34:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.086 18:34:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:01.086 18:34:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:01.086 18:34:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:01.086 18:34:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:01.086 18:34:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:01.086 18:34:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:01.086 18:34:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.086 18:34:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.086 18:34:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:01.086 18:34:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:01.086 18:34:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:01.086 18:34:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:01.086 18:34:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:01.086 18:34:08 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.086 18:34:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:01.086 18:34:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:01.086 18:34:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:01.086 18:34:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:01.086 18:34:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:01.086 18:34:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:01.086 Cannot find device "nvmf_tgt_br" 00:18:01.086 18:34:08 -- nvmf/common.sh@154 -- # true 00:18:01.086 18:34:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.086 Cannot find device "nvmf_tgt_br2" 00:18:01.086 18:34:08 -- nvmf/common.sh@155 -- # true 00:18:01.086 18:34:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:01.086 18:34:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:01.086 Cannot find device "nvmf_tgt_br" 00:18:01.086 18:34:08 -- nvmf/common.sh@157 -- # true 00:18:01.086 18:34:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:01.086 Cannot find device "nvmf_tgt_br2" 00:18:01.086 18:34:08 -- nvmf/common.sh@158 -- # true 00:18:01.086 18:34:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:01.344 18:34:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:01.344 18:34:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.344 18:34:08 -- nvmf/common.sh@161 -- # true 00:18:01.344 18:34:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.344 18:34:08 -- nvmf/common.sh@162 -- # true 00:18:01.344 18:34:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.344 18:34:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.344 18:34:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.344 18:34:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.344 18:34:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.344 18:34:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.344 18:34:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.344 18:34:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:01.344 18:34:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:01.344 18:34:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:01.344 18:34:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:01.344 18:34:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:01.344 18:34:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:01.344 18:34:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.344 18:34:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.344 18:34:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.344 18:34:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:18:01.344 18:34:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:01.344 18:34:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.344 18:34:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.344 18:34:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.344 18:34:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.344 18:34:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.344 18:34:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:01.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:01.344 00:18:01.344 --- 10.0.0.2 ping statistics --- 00:18:01.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.344 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:01.344 18:34:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:01.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:01.344 00:18:01.344 --- 10.0.0.3 ping statistics --- 00:18:01.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.344 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:01.344 18:34:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:01.344 00:18:01.344 --- 10.0.0.1 ping statistics --- 00:18:01.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.344 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:01.344 18:34:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.344 18:34:08 -- nvmf/common.sh@421 -- # return 0 00:18:01.344 18:34:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:01.344 18:34:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.344 18:34:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:01.344 18:34:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:01.344 18:34:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.344 18:34:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:01.344 18:34:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:01.601 18:34:08 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90021 00:18:01.601 18:34:08 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:01.601 18:34:08 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:01.601 18:34:08 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90021 00:18:01.601 18:34:08 -- common/autotest_common.sh@819 -- # '[' -z 90021 ']' 00:18:01.601 18:34:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.601 18:34:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:01.601 18:34:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
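The network plumbing traced above is nvmf_veth_init: a private namespace for the target, veth pairs tied together by a bridge, 10.0.0.1/24 on the host side and 10.0.0.2/24 (plus 10.0.0.3/24 on a second interface) inside the namespace, an iptables rule admitting port 4420, and pings in both directions to verify the path. Roughly, leaving out the initial cleanup attempts and the individual "ip link set ... up" calls:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the target then runs inside the namespace, single core for the fuzz test
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1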
00:18:01.601 18:34:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:01.601 18:34:08 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 18:34:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:02.533 18:34:09 -- common/autotest_common.sh@852 -- # return 0 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.533 18:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.533 18:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 18:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:02.533 18:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.533 18:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 Malloc0 00:18:02.533 18:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.533 18:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.533 18:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 18:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.533 18:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.533 18:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 18:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.533 18:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.533 18:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 18:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:02.533 18:34:09 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:03.098 Shutting down the fuzz application 00:18:03.098 18:34:10 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:03.356 Shutting down the fuzz application 00:18:03.356 18:34:10 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.356 18:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.356 18:34:10 -- common/autotest_common.sh@10 -- # set +x 00:18:03.356 18:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.356 18:34:10 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:03.356 18:34:10 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:03.356 18:34:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:03.356 18:34:10 -- nvmf/common.sh@116 -- # sync 00:18:03.356 18:34:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:03.356 18:34:10 -- nvmf/common.sh@119 -- # set +e 00:18:03.356 18:34:10 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:03.356 18:34:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:03.356 rmmod nvme_tcp 00:18:03.356 rmmod nvme_fabrics 00:18:03.356 rmmod nvme_keyring 00:18:03.356 18:34:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:03.356 18:34:10 -- nvmf/common.sh@123 -- # set -e 00:18:03.356 18:34:10 -- nvmf/common.sh@124 -- # return 0 00:18:03.356 18:34:10 -- nvmf/common.sh@477 -- # '[' -n 90021 ']' 00:18:03.356 18:34:10 -- nvmf/common.sh@478 -- # killprocess 90021 00:18:03.356 18:34:10 -- common/autotest_common.sh@926 -- # '[' -z 90021 ']' 00:18:03.356 18:34:10 -- common/autotest_common.sh@930 -- # kill -0 90021 00:18:03.356 18:34:10 -- common/autotest_common.sh@931 -- # uname 00:18:03.356 18:34:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:03.356 18:34:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90021 00:18:03.356 18:34:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:03.356 18:34:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:03.356 killing process with pid 90021 00:18:03.356 18:34:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90021' 00:18:03.356 18:34:10 -- common/autotest_common.sh@945 -- # kill 90021 00:18:03.356 18:34:10 -- common/autotest_common.sh@950 -- # wait 90021 00:18:03.614 18:34:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:03.614 18:34:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:03.614 18:34:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:03.614 18:34:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.614 18:34:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:03.614 18:34:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.614 18:34:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.614 18:34:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.614 18:34:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:03.614 18:34:10 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:03.614 00:18:03.614 real 0m2.639s 00:18:03.614 user 0m2.738s 00:18:03.614 sys 0m0.656s 00:18:03.614 18:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.614 18:34:10 -- common/autotest_common.sh@10 -- # set +x 00:18:03.614 ************************************ 00:18:03.614 END TEST nvmf_fuzz 00:18:03.614 ************************************ 00:18:03.614 18:34:11 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:03.614 18:34:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:03.614 18:34:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.614 18:34:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.614 ************************************ 00:18:03.614 START TEST nvmf_multiconnection 00:18:03.614 ************************************ 00:18:03.614 18:34:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:03.872 * Looking for test storage... 
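Before the multiconnection run begins, it is worth restating the two fuzz passes that just completed: fabrics_fuzz.sh points nvme_fuzz at the single TCP subsystem it created, first as a 30-second randomized pass with a fixed seed, then as a replay of the bundled example.json command set; both print "Shutting down the fuzz application" on clean exit. Restated from the trace, using the trid string defined at fabrics_fuzz.sh@27:

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # timed randomized pass, seeded with -S 123456
    /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    # replay of the canned JSON command set
    /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a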
00:18:03.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:03.872 18:34:11 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:03.872 18:34:11 -- nvmf/common.sh@7 -- # uname -s 00:18:03.872 18:34:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.872 18:34:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.872 18:34:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.872 18:34:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.872 18:34:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.872 18:34:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.872 18:34:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.872 18:34:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.872 18:34:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.872 18:34:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.872 18:34:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:18:03.872 18:34:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:18:03.872 18:34:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.872 18:34:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.872 18:34:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:03.872 18:34:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:03.872 18:34:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.872 18:34:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.872 18:34:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.872 18:34:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.872 18:34:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.872 18:34:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.872 18:34:11 -- 
paths/export.sh@5 -- # export PATH 00:18:03.872 18:34:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.872 18:34:11 -- nvmf/common.sh@46 -- # : 0 00:18:03.872 18:34:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:03.872 18:34:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:03.872 18:34:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:03.872 18:34:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.872 18:34:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.872 18:34:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:03.872 18:34:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:03.872 18:34:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:03.872 18:34:11 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.872 18:34:11 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.872 18:34:11 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:03.872 18:34:11 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:03.872 18:34:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:03.872 18:34:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.872 18:34:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:03.872 18:34:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:03.872 18:34:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:03.872 18:34:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.872 18:34:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.872 18:34:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.872 18:34:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:03.872 18:34:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:03.872 18:34:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:03.872 18:34:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:03.872 18:34:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:03.872 18:34:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:03.872 18:34:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.872 18:34:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.872 18:34:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:03.872 18:34:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:03.872 18:34:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:03.872 18:34:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:03.872 18:34:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:03.872 18:34:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.872 18:34:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:03.872 18:34:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:03.872 18:34:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:03.872 18:34:11 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:03.872 18:34:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:03.872 18:34:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:03.872 Cannot find device "nvmf_tgt_br" 00:18:03.872 18:34:11 -- nvmf/common.sh@154 -- # true 00:18:03.872 18:34:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:03.872 Cannot find device "nvmf_tgt_br2" 00:18:03.872 18:34:11 -- nvmf/common.sh@155 -- # true 00:18:03.872 18:34:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:03.872 18:34:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:03.872 Cannot find device "nvmf_tgt_br" 00:18:03.872 18:34:11 -- nvmf/common.sh@157 -- # true 00:18:03.872 18:34:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:03.872 Cannot find device "nvmf_tgt_br2" 00:18:03.872 18:34:11 -- nvmf/common.sh@158 -- # true 00:18:03.872 18:34:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:03.872 18:34:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:03.872 18:34:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.872 18:34:11 -- nvmf/common.sh@161 -- # true 00:18:03.872 18:34:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.872 18:34:11 -- nvmf/common.sh@162 -- # true 00:18:03.872 18:34:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.872 18:34:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.872 18:34:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.872 18:34:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:04.146 18:34:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.146 18:34:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.146 18:34:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:04.146 18:34:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:04.146 18:34:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:04.146 18:34:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:04.146 18:34:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:04.146 18:34:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:04.146 18:34:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:04.146 18:34:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:04.146 18:34:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:04.146 18:34:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:04.146 18:34:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:04.146 18:34:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:04.146 18:34:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:04.146 18:34:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:04.146 18:34:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:04.146 
18:34:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:04.146 18:34:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:04.146 18:34:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:04.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:04.146 00:18:04.146 --- 10.0.0.2 ping statistics --- 00:18:04.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.146 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:04.146 18:34:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:04.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:04.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:04.146 00:18:04.146 --- 10.0.0.3 ping statistics --- 00:18:04.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.146 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:04.146 18:34:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:04.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:18:04.146 00:18:04.146 --- 10.0.0.1 ping statistics --- 00:18:04.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.146 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:04.146 18:34:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.146 18:34:11 -- nvmf/common.sh@421 -- # return 0 00:18:04.146 18:34:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:04.146 18:34:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.146 18:34:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:04.146 18:34:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:04.146 18:34:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.146 18:34:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:04.146 18:34:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:04.146 18:34:11 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:04.146 18:34:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.146 18:34:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:04.146 18:34:11 -- common/autotest_common.sh@10 -- # set +x 00:18:04.146 18:34:11 -- nvmf/common.sh@469 -- # nvmfpid=90226 00:18:04.146 18:34:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:04.146 18:34:11 -- nvmf/common.sh@470 -- # waitforlisten 90226 00:18:04.146 18:34:11 -- common/autotest_common.sh@819 -- # '[' -z 90226 ']' 00:18:04.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.146 18:34:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.146 18:34:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.146 18:34:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.146 18:34:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.146 18:34:11 -- common/autotest_common.sh@10 -- # set +x 00:18:04.146 [2024-07-14 18:34:11.519372] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
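Unlike the single-core fuzz target, the multiconnection test starts nvmf_tgt with core mask 0xF, which is why the EAL and reactor notices that follow report four cores (0 through 3). The launch and the first RPC, as traced (rpc_cmd is the test framework's wrapper around scripts/rpc.py):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # once the target listens on /var/tmp/spdk.sock, create the TCP transport with an 8192-byte IO unit
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192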
00:18:04.146 [2024-07-14 18:34:11.520161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.413 [2024-07-14 18:34:11.654945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.414 [2024-07-14 18:34:11.733556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.414 [2024-07-14 18:34:11.733893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.414 [2024-07-14 18:34:11.733922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.414 [2024-07-14 18:34:11.733938] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.414 [2024-07-14 18:34:11.734019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.414 [2024-07-14 18:34:11.734286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.414 [2024-07-14 18:34:11.734945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.414 [2024-07-14 18:34:11.735002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.346 18:34:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:05.346 18:34:12 -- common/autotest_common.sh@852 -- # return 0 00:18:05.346 18:34:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:05.346 18:34:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 18:34:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.346 18:34:12 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 [2024-07-14 18:34:12.619460] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:05.346 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.346 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 Malloc1 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 [2024-07-14 18:34:12.705118] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.346 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 Malloc2 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.346 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.346 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.346 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:05.346 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.346 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 Malloc3 00:18:05.604 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.604 18:34:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:05.604 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.604 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.604 18:34:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:05.604 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.604 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.604 18:34:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:05.604 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.604 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.604 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.604 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:05.604 
18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 Malloc4 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.605 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 Malloc5 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.605 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 Malloc6 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.605 18:34:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:05.605 18:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 Malloc7 00:18:05.605 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:05.605 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.605 18:34:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:05.605 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.605 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.863 18:34:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 Malloc8 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 
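The long run of rpc_cmd calls above and below is one loop in multiconnection.sh: for each of NVMF_SUBSYS=11 subsystems it creates a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), a subsystem with serial SPDK<i>, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. Condensed from the trace:

    for i in $(seq 1 $NVMF_SUBSYS); do    # NVMF_SUBSYS=11
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done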
00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.863 18:34:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 Malloc9 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.863 18:34:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 Malloc10 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.863 18:34:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 Malloc11 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:05.863 18:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.863 18:34:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 18:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.863 18:34:13 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:05.863 18:34:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.863 18:34:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:06.120 18:34:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:06.120 18:34:13 -- common/autotest_common.sh@1177 -- # local i=0 00:18:06.120 18:34:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.120 18:34:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:06.120 18:34:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:08.649 18:34:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:08.649 18:34:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:08.649 18:34:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:08.649 18:34:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:08.649 18:34:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.649 18:34:15 -- common/autotest_common.sh@1187 -- # return 0 00:18:08.649 18:34:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.649 18:34:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:08.649 18:34:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:08.649 18:34:15 -- common/autotest_common.sh@1177 -- # local i=0 00:18:08.649 18:34:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.649 18:34:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:08.649 18:34:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:10.555 18:34:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:10.555 18:34:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:10.555 18:34:17 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:10.555 18:34:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:10.555 18:34:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.555 18:34:17 -- common/autotest_common.sh@1187 -- # return 0 00:18:10.555 18:34:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:10.555 18:34:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:10.555 18:34:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:10.555 18:34:17 -- common/autotest_common.sh@1177 -- # local i=0 00:18:10.555 18:34:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.555 18:34:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:10.555 18:34:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:12.459 18:34:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:12.459 18:34:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:12.459 18:34:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:12.459 18:34:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:12.459 18:34:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.459 18:34:19 -- common/autotest_common.sh@1187 -- # return 0 00:18:12.459 18:34:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.459 18:34:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:12.717 18:34:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:12.717 18:34:20 -- common/autotest_common.sh@1177 -- # local i=0 00:18:12.717 18:34:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.717 18:34:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:12.717 18:34:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:14.620 18:34:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:14.620 18:34:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:14.620 18:34:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:14.879 18:34:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:14.879 18:34:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.879 18:34:22 -- common/autotest_common.sh@1187 -- # return 0 00:18:14.879 18:34:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.879 18:34:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:14.879 18:34:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:14.879 18:34:22 -- common/autotest_common.sh@1177 -- # local i=0 00:18:14.879 18:34:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.879 18:34:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:14.879 18:34:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:17.422 18:34:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:17.422 18:34:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:17.422 18:34:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:17.422 18:34:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:17.422 18:34:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.422 18:34:24 
-- common/autotest_common.sh@1187 -- # return 0 00:18:17.422 18:34:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.422 18:34:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:17.422 18:34:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:17.422 18:34:24 -- common/autotest_common.sh@1177 -- # local i=0 00:18:17.422 18:34:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.422 18:34:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:17.422 18:34:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:19.325 18:34:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:19.325 18:34:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:19.325 18:34:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:19.325 18:34:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:19.325 18:34:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.325 18:34:26 -- common/autotest_common.sh@1187 -- # return 0 00:18:19.325 18:34:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.325 18:34:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:19.325 18:34:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:19.325 18:34:26 -- common/autotest_common.sh@1177 -- # local i=0 00:18:19.325 18:34:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.325 18:34:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:19.325 18:34:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:21.228 18:34:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:21.228 18:34:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:21.228 18:34:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:21.228 18:34:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:21.228 18:34:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.228 18:34:28 -- common/autotest_common.sh@1187 -- # return 0 00:18:21.228 18:34:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.228 18:34:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:21.486 18:34:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:21.486 18:34:28 -- common/autotest_common.sh@1177 -- # local i=0 00:18:21.486 18:34:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.486 18:34:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:21.486 18:34:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:23.389 18:34:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:23.389 18:34:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:23.389 18:34:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:23.648 18:34:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
00:18:23.648 18:34:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.648 18:34:30 -- common/autotest_common.sh@1187 -- # return 0 00:18:23.648 18:34:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.648 18:34:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:23.648 18:34:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:23.648 18:34:30 -- common/autotest_common.sh@1177 -- # local i=0 00:18:23.648 18:34:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.648 18:34:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:23.648 18:34:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:26.178 18:34:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:26.178 18:34:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:26.178 18:34:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:26.178 18:34:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:26.178 18:34:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.178 18:34:33 -- common/autotest_common.sh@1187 -- # return 0 00:18:26.178 18:34:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:26.178 18:34:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:26.178 18:34:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:26.178 18:34:33 -- common/autotest_common.sh@1177 -- # local i=0 00:18:26.178 18:34:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.178 18:34:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:26.178 18:34:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:28.076 18:34:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:28.076 18:34:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:28.076 18:34:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:28.077 18:34:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:28.077 18:34:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.077 18:34:35 -- common/autotest_common.sh@1187 -- # return 0 00:18:28.077 18:34:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.077 18:34:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:28.077 18:34:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:28.077 18:34:35 -- common/autotest_common.sh@1177 -- # local i=0 00:18:28.077 18:34:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.077 18:34:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:28.077 18:34:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:30.004 18:34:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:30.004 18:34:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:30.004 18:34:37 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:30.262 18:34:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:30.262 18:34:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.262 18:34:37 -- common/autotest_common.sh@1187 -- # return 0 00:18:30.262 18:34:37 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:30.262 [global] 00:18:30.262 thread=1 00:18:30.262 invalidate=1 00:18:30.262 rw=read 00:18:30.262 time_based=1 00:18:30.262 runtime=10 00:18:30.262 ioengine=libaio 00:18:30.262 direct=1 00:18:30.262 bs=262144 00:18:30.262 iodepth=64 00:18:30.262 norandommap=1 00:18:30.262 numjobs=1 00:18:30.262 00:18:30.262 [job0] 00:18:30.262 filename=/dev/nvme0n1 00:18:30.262 [job1] 00:18:30.262 filename=/dev/nvme10n1 00:18:30.262 [job2] 00:18:30.262 filename=/dev/nvme1n1 00:18:30.262 [job3] 00:18:30.262 filename=/dev/nvme2n1 00:18:30.262 [job4] 00:18:30.262 filename=/dev/nvme3n1 00:18:30.262 [job5] 00:18:30.262 filename=/dev/nvme4n1 00:18:30.262 [job6] 00:18:30.262 filename=/dev/nvme5n1 00:18:30.262 [job7] 00:18:30.262 filename=/dev/nvme6n1 00:18:30.262 [job8] 00:18:30.262 filename=/dev/nvme7n1 00:18:30.262 [job9] 00:18:30.262 filename=/dev/nvme8n1 00:18:30.262 [job10] 00:18:30.262 filename=/dev/nvme9n1 00:18:30.262 Could not set queue depth (nvme0n1) 00:18:30.262 Could not set queue depth (nvme10n1) 00:18:30.262 Could not set queue depth (nvme1n1) 00:18:30.262 Could not set queue depth (nvme2n1) 00:18:30.262 Could not set queue depth (nvme3n1) 00:18:30.262 Could not set queue depth (nvme4n1) 00:18:30.262 Could not set queue depth (nvme5n1) 00:18:30.262 Could not set queue depth (nvme6n1) 00:18:30.262 Could not set queue depth (nvme7n1) 00:18:30.262 Could not set queue depth (nvme8n1) 00:18:30.262 Could not set queue depth (nvme9n1) 00:18:30.520 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.520 fio-3.35 00:18:30.520 Starting 11 threads 00:18:42.723 00:18:42.723 job0: (groupid=0, jobs=1): err= 0: pid=90708: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=461, BW=115MiB/s (121MB/s)(1171MiB/10141msec) 00:18:42.723 slat (usec): min=15, max=92009, avg=2060.01, stdev=7033.95 
00:18:42.723 clat (msec): min=27, max=262, avg=136.21, stdev=37.90 00:18:42.723 lat (msec): min=27, max=289, avg=138.27, stdev=38.89 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 66], 5.00th=[ 93], 10.00th=[ 102], 20.00th=[ 109], 00:18:42.723 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 122], 60.00th=[ 129], 00:18:42.723 | 70.00th=[ 150], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 205], 00:18:42.723 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 264], 99.95th=[ 264], 00:18:42.723 | 99.99th=[ 264] 00:18:42.723 bw ( KiB/s): min=78336, max=152881, per=7.40%, avg=118247.50, stdev=26963.75, samples=20 00:18:42.723 iops : min= 306, max= 597, avg=461.80, stdev=105.34, samples=20 00:18:42.723 lat (msec) : 50=0.83%, 100=7.58%, 250=91.40%, 500=0.19% 00:18:42.723 cpu : usr=0.14%, sys=1.63%, ctx=1000, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.723 issued rwts: total=4684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.723 job1: (groupid=0, jobs=1): err= 0: pid=90709: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=643, BW=161MiB/s (169MB/s)(1621MiB/10072msec) 00:18:42.723 slat (usec): min=17, max=75814, avg=1523.14, stdev=5321.33 00:18:42.723 clat (msec): min=13, max=185, avg=97.75, stdev=24.25 00:18:42.723 lat (msec): min=13, max=198, avg=99.27, stdev=25.00 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 38], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 74], 00:18:42.723 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 103], 60.00th=[ 108], 00:18:42.723 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 132], 00:18:42.723 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 186], 99.95th=[ 186], 00:18:42.723 | 99.99th=[ 186] 00:18:42.723 bw ( KiB/s): min=129024, max=233984, per=10.27%, avg=164223.05, stdev=34635.16, samples=20 00:18:42.723 iops : min= 504, max= 914, avg=641.30, stdev=135.31, samples=20 00:18:42.723 lat (msec) : 20=0.06%, 50=1.74%, 100=45.73%, 250=52.47% 00:18:42.723 cpu : usr=0.23%, sys=2.37%, ctx=1334, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.723 issued rwts: total=6482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.723 job2: (groupid=0, jobs=1): err= 0: pid=90710: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=905, BW=226MiB/s (237MB/s)(2280MiB/10071msec) 00:18:42.723 slat (usec): min=15, max=75435, avg=1056.27, stdev=3988.61 00:18:42.723 clat (msec): min=16, max=184, avg=69.54, stdev=25.79 00:18:42.723 lat (msec): min=16, max=184, avg=70.60, stdev=26.25 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 42], 00:18:42.723 | 30.00th=[ 49], 40.00th=[ 68], 50.00th=[ 75], 60.00th=[ 80], 00:18:42.723 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 100], 95.00th=[ 112], 00:18:42.723 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 169], 99.95th=[ 169], 00:18:42.723 | 99.99th=[ 186] 00:18:42.723 bw ( KiB/s): min=141029, max=417469, per=14.50%, avg=231859.20, stdev=89097.68, samples=20 00:18:42.723 iops : min= 550, 
max= 1630, avg=905.45, stdev=348.05, samples=20 00:18:42.723 lat (msec) : 20=0.14%, 50=30.89%, 100=59.33%, 250=9.64% 00:18:42.723 cpu : usr=0.25%, sys=2.75%, ctx=1718, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.723 issued rwts: total=9120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.723 job3: (groupid=0, jobs=1): err= 0: pid=90711: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=578, BW=145MiB/s (152MB/s)(1457MiB/10071msec) 00:18:42.723 slat (usec): min=21, max=119178, avg=1693.86, stdev=6014.83 00:18:42.723 clat (msec): min=23, max=273, avg=108.68, stdev=25.18 00:18:42.723 lat (msec): min=24, max=282, avg=110.38, stdev=25.93 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 52], 5.00th=[ 66], 10.00th=[ 73], 20.00th=[ 90], 00:18:42.723 | 30.00th=[ 100], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 115], 00:18:42.723 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 136], 95.00th=[ 157], 00:18:42.723 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 197], 99.95th=[ 197], 00:18:42.723 | 99.99th=[ 275] 00:18:42.723 bw ( KiB/s): min=103424, max=229376, per=9.23%, avg=147520.70, stdev=29982.55, samples=20 00:18:42.723 iops : min= 404, max= 896, avg=576.05, stdev=117.14, samples=20 00:18:42.723 lat (msec) : 50=0.72%, 100=29.83%, 250=69.43%, 500=0.02% 00:18:42.723 cpu : usr=0.26%, sys=1.82%, ctx=1352, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.723 issued rwts: total=5829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.723 job4: (groupid=0, jobs=1): err= 0: pid=90712: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=697, BW=174MiB/s (183MB/s)(1755MiB/10071msec) 00:18:42.723 slat (usec): min=17, max=113881, avg=1407.23, stdev=5149.29 00:18:42.723 clat (msec): min=18, max=221, avg=90.26, stdev=22.37 00:18:42.723 lat (msec): min=19, max=221, avg=91.67, stdev=22.95 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 36], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 74], 00:18:42.723 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 92], 00:18:42.723 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 127], 00:18:42.723 | 99.00th=[ 148], 99.50th=[ 215], 99.90th=[ 220], 99.95th=[ 222], 00:18:42.723 | 99.99th=[ 222] 00:18:42.723 bw ( KiB/s): min=136431, max=211544, per=11.13%, avg=177990.65, stdev=27847.89, samples=20 00:18:42.723 iops : min= 532, max= 826, avg=695.10, stdev=108.85, samples=20 00:18:42.723 lat (msec) : 20=0.09%, 50=1.89%, 100=69.62%, 250=28.40% 00:18:42.723 cpu : usr=0.24%, sys=2.50%, ctx=1312, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.723 issued rwts: total=7020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.723 job5: (groupid=0, jobs=1): err= 0: 
pid=90713: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=451, BW=113MiB/s (118MB/s)(1145MiB/10148msec) 00:18:42.723 slat (usec): min=21, max=87310, avg=2190.83, stdev=7275.55 00:18:42.723 clat (msec): min=32, max=277, avg=139.43, stdev=38.58 00:18:42.723 lat (msec): min=32, max=277, avg=141.62, stdev=39.60 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 58], 5.00th=[ 94], 10.00th=[ 103], 20.00th=[ 111], 00:18:42.723 | 30.00th=[ 116], 40.00th=[ 121], 50.00th=[ 125], 60.00th=[ 132], 00:18:42.723 | 70.00th=[ 161], 80.00th=[ 184], 90.00th=[ 197], 95.00th=[ 205], 00:18:42.723 | 99.00th=[ 239], 99.50th=[ 259], 99.90th=[ 279], 99.95th=[ 279], 00:18:42.723 | 99.99th=[ 279] 00:18:42.723 bw ( KiB/s): min=77157, max=149716, per=7.23%, avg=115529.35, stdev=26938.70, samples=20 00:18:42.723 iops : min= 301, max= 584, avg=451.10, stdev=105.14, samples=20 00:18:42.723 lat (msec) : 50=0.44%, 100=7.47%, 250=91.37%, 500=0.72% 00:18:42.723 cpu : usr=0.22%, sys=1.53%, ctx=918, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.723 issued rwts: total=4579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.723 job6: (groupid=0, jobs=1): err= 0: pid=90714: Sun Jul 14 18:34:48 2024 00:18:42.723 read: IOPS=631, BW=158MiB/s (166MB/s)(1591MiB/10069msec) 00:18:42.723 slat (usec): min=21, max=141730, avg=1524.10, stdev=6102.21 00:18:42.723 clat (msec): min=8, max=275, avg=99.66, stdev=27.34 00:18:42.723 lat (msec): min=8, max=302, avg=101.18, stdev=28.12 00:18:42.723 clat percentiles (msec): 00:18:42.723 | 1.00th=[ 56], 5.00th=[ 65], 10.00th=[ 69], 20.00th=[ 74], 00:18:42.723 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 102], 60.00th=[ 107], 00:18:42.723 | 70.00th=[ 113], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 146], 00:18:42.723 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 264], 99.95th=[ 275], 00:18:42.723 | 99.99th=[ 275] 00:18:42.723 bw ( KiB/s): min=79519, max=218624, per=10.09%, avg=161310.35, stdev=37707.08, samples=20 00:18:42.723 iops : min= 310, max= 854, avg=629.95, stdev=147.33, samples=20 00:18:42.723 lat (msec) : 10=0.06%, 20=0.09%, 50=0.13%, 100=48.34%, 250=51.09% 00:18:42.723 lat (msec) : 500=0.28% 00:18:42.723 cpu : usr=0.24%, sys=1.97%, ctx=1178, majf=0, minf=4097 00:18:42.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:42.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.724 issued rwts: total=6363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.724 job7: (groupid=0, jobs=1): err= 0: pid=90715: Sun Jul 14 18:34:48 2024 00:18:42.724 read: IOPS=578, BW=145MiB/s (152MB/s)(1458MiB/10085msec) 00:18:42.724 slat (usec): min=15, max=73132, avg=1673.06, stdev=5787.11 00:18:42.724 clat (msec): min=21, max=219, avg=108.79, stdev=27.52 00:18:42.724 lat (msec): min=21, max=243, avg=110.46, stdev=28.23 00:18:42.724 clat percentiles (msec): 00:18:42.724 | 1.00th=[ 41], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 90], 00:18:42.724 | 30.00th=[ 102], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 115], 00:18:42.724 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 136], 95.00th=[ 157], 00:18:42.724 | 
99.00th=[ 188], 99.50th=[ 197], 99.90th=[ 220], 99.95th=[ 220], 00:18:42.724 | 99.99th=[ 220] 00:18:42.724 bw ( KiB/s): min=83968, max=217165, per=9.23%, avg=147616.65, stdev=33456.61, samples=20 00:18:42.724 iops : min= 328, max= 848, avg=576.45, stdev=130.68, samples=20 00:18:42.724 lat (msec) : 50=2.18%, 100=27.04%, 250=70.79% 00:18:42.724 cpu : usr=0.12%, sys=1.87%, ctx=1168, majf=0, minf=4097 00:18:42.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:42.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.724 issued rwts: total=5833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.724 job8: (groupid=0, jobs=1): err= 0: pid=90716: Sun Jul 14 18:34:48 2024 00:18:42.724 read: IOPS=442, BW=111MiB/s (116MB/s)(1123MiB/10143msec) 00:18:42.724 slat (usec): min=21, max=111650, avg=2231.32, stdev=7504.32 00:18:42.724 clat (msec): min=56, max=266, avg=142.06, stdev=37.22 00:18:42.724 lat (msec): min=56, max=297, avg=144.29, stdev=38.28 00:18:42.724 clat percentiles (msec): 00:18:42.724 | 1.00th=[ 94], 5.00th=[ 104], 10.00th=[ 107], 20.00th=[ 113], 00:18:42.724 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 126], 60.00th=[ 134], 00:18:42.724 | 70.00th=[ 163], 80.00th=[ 186], 90.00th=[ 201], 95.00th=[ 211], 00:18:42.724 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 268], 99.95th=[ 268], 00:18:42.724 | 99.99th=[ 268] 00:18:42.724 bw ( KiB/s): min=69493, max=143360, per=7.09%, avg=113279.05, stdev=25910.28, samples=20 00:18:42.724 iops : min= 271, max= 560, avg=442.35, stdev=101.20, samples=20 00:18:42.724 lat (msec) : 100=3.96%, 250=95.86%, 500=0.18% 00:18:42.724 cpu : usr=0.18%, sys=1.44%, ctx=1006, majf=0, minf=4097 00:18:42.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:42.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.724 issued rwts: total=4491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.724 job9: (groupid=0, jobs=1): err= 0: pid=90717: Sun Jul 14 18:34:48 2024 00:18:42.724 read: IOPS=451, BW=113MiB/s (118MB/s)(1145MiB/10137msec) 00:18:42.724 slat (usec): min=20, max=102521, avg=2120.21, stdev=7855.24 00:18:42.724 clat (usec): min=1224, max=301416, avg=139289.89, stdev=43467.62 00:18:42.724 lat (usec): min=1264, max=301454, avg=141410.10, stdev=44667.60 00:18:42.724 clat percentiles (msec): 00:18:42.724 | 1.00th=[ 8], 5.00th=[ 95], 10.00th=[ 103], 20.00th=[ 109], 00:18:42.724 | 30.00th=[ 115], 40.00th=[ 120], 50.00th=[ 126], 60.00th=[ 138], 00:18:42.724 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 197], 95.00th=[ 207], 00:18:42.724 | 99.00th=[ 243], 99.50th=[ 264], 99.90th=[ 300], 99.95th=[ 300], 00:18:42.724 | 99.99th=[ 300] 00:18:42.724 bw ( KiB/s): min=75113, max=167246, per=7.24%, avg=115697.55, stdev=29723.30, samples=20 00:18:42.724 iops : min= 293, max= 653, avg=451.80, stdev=116.18, samples=20 00:18:42.724 lat (msec) : 2=0.61%, 4=0.13%, 10=0.50%, 20=0.61%, 50=0.96% 00:18:42.724 lat (msec) : 100=4.04%, 250=92.34%, 500=0.81% 00:18:42.724 cpu : usr=0.24%, sys=1.74%, ctx=937, majf=0, minf=4097 00:18:42.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:42.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:18:42.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.724 issued rwts: total=4581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.724 job10: (groupid=0, jobs=1): err= 0: pid=90718: Sun Jul 14 18:34:48 2024 00:18:42.724 read: IOPS=433, BW=108MiB/s (114MB/s)(1099MiB/10142msec) 00:18:42.724 slat (usec): min=21, max=106407, avg=2258.50, stdev=7941.21 00:18:42.724 clat (msec): min=35, max=303, avg=145.16, stdev=38.93 00:18:42.724 lat (msec): min=35, max=303, avg=147.41, stdev=40.08 00:18:42.724 clat percentiles (msec): 00:18:42.724 | 1.00th=[ 89], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 111], 00:18:42.724 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 129], 60.00th=[ 155], 00:18:42.724 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 201], 95.00th=[ 209], 00:18:42.724 | 99.00th=[ 224], 99.50th=[ 249], 99.90th=[ 305], 99.95th=[ 305], 00:18:42.724 | 99.99th=[ 305] 00:18:42.724 bw ( KiB/s): min=79872, max=146432, per=6.93%, avg=110845.10, stdev=25721.30, samples=20 00:18:42.724 iops : min= 312, max= 572, avg=432.85, stdev=100.51, samples=20 00:18:42.724 lat (msec) : 50=0.32%, 100=4.53%, 250=94.86%, 500=0.30% 00:18:42.724 cpu : usr=0.23%, sys=1.68%, ctx=814, majf=0, minf=4097 00:18:42.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:42.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.724 issued rwts: total=4395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.724 00:18:42.724 Run status group 0 (all jobs): 00:18:42.724 READ: bw=1561MiB/s (1637MB/s), 108MiB/s-226MiB/s (114MB/s-237MB/s), io=15.5GiB (16.6GB), run=10069-10148msec 00:18:42.724 00:18:42.724 Disk stats (read/write): 00:18:42.724 nvme0n1: ios=9240/0, merge=0/0, ticks=1238005/0, in_queue=1238005, util=97.54% 00:18:42.724 nvme10n1: ios=12873/0, merge=0/0, ticks=1240938/0, in_queue=1240938, util=97.80% 00:18:42.724 nvme1n1: ios=18113/0, merge=0/0, ticks=1233153/0, in_queue=1233153, util=97.30% 00:18:42.724 nvme2n1: ios=11590/0, merge=0/0, ticks=1240504/0, in_queue=1240504, util=97.67% 00:18:42.724 nvme3n1: ios=13924/0, merge=0/0, ticks=1240502/0, in_queue=1240502, util=98.01% 00:18:42.724 nvme4n1: ios=9031/0, merge=0/0, ticks=1238409/0, in_queue=1238409, util=98.31% 00:18:42.724 nvme5n1: ios=12598/0, merge=0/0, ticks=1241564/0, in_queue=1241564, util=98.14% 00:18:42.724 nvme6n1: ios=11564/0, merge=0/0, ticks=1240489/0, in_queue=1240489, util=98.08% 00:18:42.724 nvme7n1: ios=8854/0, merge=0/0, ticks=1237265/0, in_queue=1237265, util=98.35% 00:18:42.724 nvme8n1: ios=9035/0, merge=0/0, ticks=1239876/0, in_queue=1239876, util=98.85% 00:18:42.724 nvme9n1: ios=8663/0, merge=0/0, ticks=1240349/0, in_queue=1240349, util=98.95% 00:18:42.724 18:34:48 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:42.724 [global] 00:18:42.724 thread=1 00:18:42.724 invalidate=1 00:18:42.724 rw=randwrite 00:18:42.724 time_based=1 00:18:42.724 runtime=10 00:18:42.724 ioengine=libaio 00:18:42.724 direct=1 00:18:42.724 bs=262144 00:18:42.724 iodepth=64 00:18:42.724 norandommap=1 00:18:42.724 numjobs=1 00:18:42.724 00:18:42.724 [job0] 00:18:42.724 filename=/dev/nvme0n1 00:18:42.724 [job1] 00:18:42.724 filename=/dev/nvme10n1 00:18:42.724 [job2] 
00:18:42.724 filename=/dev/nvme1n1 00:18:42.724 [job3] 00:18:42.724 filename=/dev/nvme2n1 00:18:42.724 [job4] 00:18:42.724 filename=/dev/nvme3n1 00:18:42.724 [job5] 00:18:42.724 filename=/dev/nvme4n1 00:18:42.724 [job6] 00:18:42.724 filename=/dev/nvme5n1 00:18:42.724 [job7] 00:18:42.724 filename=/dev/nvme6n1 00:18:42.724 [job8] 00:18:42.724 filename=/dev/nvme7n1 00:18:42.724 [job9] 00:18:42.724 filename=/dev/nvme8n1 00:18:42.724 [job10] 00:18:42.724 filename=/dev/nvme9n1 00:18:42.724 Could not set queue depth (nvme0n1) 00:18:42.724 Could not set queue depth (nvme10n1) 00:18:42.724 Could not set queue depth (nvme1n1) 00:18:42.724 Could not set queue depth (nvme2n1) 00:18:42.724 Could not set queue depth (nvme3n1) 00:18:42.724 Could not set queue depth (nvme4n1) 00:18:42.724 Could not set queue depth (nvme5n1) 00:18:42.724 Could not set queue depth (nvme6n1) 00:18:42.724 Could not set queue depth (nvme7n1) 00:18:42.724 Could not set queue depth (nvme8n1) 00:18:42.724 Could not set queue depth (nvme9n1) 00:18:42.724 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.724 fio-3.35 00:18:42.724 Starting 11 threads 00:18:52.726 00:18:52.726 job0: (groupid=0, jobs=1): err= 0: pid=90914: Sun Jul 14 18:34:58 2024 00:18:52.726 write: IOPS=307, BW=76.9MiB/s (80.7MB/s)(784MiB/10195msec); 0 zone resets 00:18:52.726 slat (usec): min=19, max=48318, avg=3184.84, stdev=5718.91 00:18:52.726 clat (msec): min=10, max=393, avg=204.70, stdev=32.14 00:18:52.726 lat (msec): min=10, max=393, avg=207.89, stdev=32.09 00:18:52.726 clat percentiles (msec): 00:18:52.726 | 1.00th=[ 105], 5.00th=[ 140], 10.00th=[ 157], 20.00th=[ 199], 00:18:52.726 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 213], 00:18:52.726 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 232], 00:18:52.727 | 99.00th=[ 288], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 393], 00:18:52.727 | 99.99th=[ 393] 00:18:52.727 bw ( KiB/s): min=69632, max=109056, per=7.40%, avg=78668.80, stdev=8640.00, samples=20 00:18:52.727 iops : min= 272, max= 426, avg=307.30, stdev=33.75, samples=20 00:18:52.727 lat (msec) : 20=0.16%, 50=0.64%, 100=0.13%, 250=97.74%, 500=1.34% 00:18:52.727 cpu : usr=0.91%, sys=0.84%, 
ctx=1968, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,3137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job1: (groupid=0, jobs=1): err= 0: pid=90915: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=301, BW=75.5MiB/s (79.1MB/s)(769MiB/10195msec); 0 zone resets 00:18:52.727 slat (usec): min=27, max=35684, avg=3246.82, stdev=5895.73 00:18:52.727 clat (msec): min=16, max=401, avg=208.71, stdev=36.44 00:18:52.727 lat (msec): min=16, max=401, avg=211.95, stdev=36.49 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 59], 5.00th=[ 140], 10.00th=[ 150], 20.00th=[ 201], 00:18:52.727 | 30.00th=[ 207], 40.00th=[ 211], 50.00th=[ 215], 60.00th=[ 220], 00:18:52.727 | 70.00th=[ 224], 80.00th=[ 230], 90.00th=[ 236], 95.00th=[ 241], 00:18:52.727 | 99.00th=[ 296], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 401], 00:18:52.727 | 99.99th=[ 401] 00:18:52.727 bw ( KiB/s): min=69632, max=114688, per=7.26%, avg=77132.80, stdev=10058.21, samples=20 00:18:52.727 iops : min= 272, max= 448, avg=301.30, stdev=39.29, samples=20 00:18:52.727 lat (msec) : 20=0.13%, 50=0.65%, 100=1.04%, 250=96.69%, 500=1.49% 00:18:52.727 cpu : usr=0.88%, sys=1.06%, ctx=2952, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,3077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job2: (groupid=0, jobs=1): err= 0: pid=90927: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=678, BW=170MiB/s (178MB/s)(1709MiB/10073msec); 0 zone resets 00:18:52.727 slat (usec): min=18, max=28384, avg=1457.92, stdev=2498.12 00:18:52.727 clat (msec): min=30, max=167, avg=92.82, stdev=10.51 00:18:52.727 lat (msec): min=30, max=167, avg=94.28, stdev=10.42 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:18:52.727 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 93], 00:18:52.727 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 121], 00:18:52.727 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 163], 00:18:52.727 | 99.99th=[ 169] 00:18:52.727 bw ( KiB/s): min=116736, max=183808, per=16.31%, avg=173388.80, stdev=14874.71, samples=20 00:18:52.727 iops : min= 456, max= 718, avg=677.30, stdev=58.10, samples=20 00:18:52.727 lat (msec) : 50=0.18%, 100=93.84%, 250=5.98% 00:18:52.727 cpu : usr=1.51%, sys=1.95%, ctx=8719, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,6836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job3: (groupid=0, jobs=1): err= 0: pid=90928: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=320, BW=80.2MiB/s (84.1MB/s)(816MiB/10169msec); 0 zone resets 00:18:52.727 slat 
(usec): min=20, max=67367, avg=3060.62, stdev=5421.96 00:18:52.727 clat (msec): min=21, max=367, avg=196.30, stdev=23.55 00:18:52.727 lat (msec): min=21, max=367, avg=199.37, stdev=23.22 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 171], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:18:52.727 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 197], 00:18:52.727 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 207], 95.00th=[ 249], 00:18:52.727 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:18:52.727 | 99.99th=[ 368] 00:18:52.727 bw ( KiB/s): min=58368, max=86016, per=7.70%, avg=81886.00, stdev=6529.80, samples=20 00:18:52.727 iops : min= 228, max= 336, avg=319.85, stdev=25.50, samples=20 00:18:52.727 lat (msec) : 50=0.31%, 100=0.25%, 250=94.51%, 500=4.94% 00:18:52.727 cpu : usr=0.55%, sys=0.87%, ctx=4395, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,3262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job4: (groupid=0, jobs=1): err= 0: pid=90929: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=301, BW=75.3MiB/s (78.9MB/s)(768MiB/10197msec); 0 zone resets 00:18:52.727 slat (usec): min=27, max=29931, avg=3179.94, stdev=5704.80 00:18:52.727 clat (msec): min=15, max=398, avg=209.21, stdev=26.42 00:18:52.727 lat (msec): min=15, max=398, avg=212.39, stdev=26.28 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 100], 5.00th=[ 188], 10.00th=[ 197], 20.00th=[ 201], 00:18:52.727 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 213], 00:18:52.727 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 230], 00:18:52.727 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 401], 00:18:52.727 | 99.99th=[ 401] 00:18:52.727 bw ( KiB/s): min=68608, max=86528, per=7.24%, avg=77002.00, stdev=3479.27, samples=20 00:18:52.727 iops : min= 268, max= 338, avg=300.60, stdev=13.58, samples=20 00:18:52.727 lat (msec) : 20=0.07%, 50=0.26%, 100=0.75%, 250=97.30%, 500=1.63% 00:18:52.727 cpu : usr=0.77%, sys=0.82%, ctx=3194, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,3071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job5: (groupid=0, jobs=1): err= 0: pid=90930: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=321, BW=80.4MiB/s (84.3MB/s)(818MiB/10169msec); 0 zone resets 00:18:52.727 slat (usec): min=19, max=79380, avg=3052.95, stdev=5435.07 00:18:52.727 clat (msec): min=81, max=355, avg=195.78, stdev=19.42 00:18:52.727 lat (msec): min=81, max=355, avg=198.83, stdev=18.92 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:18:52.727 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 197], 00:18:52.727 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 207], 95.00th=[ 234], 00:18:52.727 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 342], 99.95th=[ 355], 00:18:52.727 | 99.99th=[ 355] 00:18:52.727 bw ( KiB/s): 
min=59392, max=86016, per=7.73%, avg=82116.40, stdev=6191.58, samples=20 00:18:52.727 iops : min= 232, max= 336, avg=320.75, stdev=24.18, samples=20 00:18:52.727 lat (msec) : 100=0.24%, 250=98.11%, 500=1.65% 00:18:52.727 cpu : usr=0.77%, sys=0.87%, ctx=2255, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job6: (groupid=0, jobs=1): err= 0: pid=90931: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=303, BW=75.9MiB/s (79.6MB/s)(774MiB/10192msec); 0 zone resets 00:18:52.727 slat (usec): min=27, max=45647, avg=3226.92, stdev=5853.45 00:18:52.727 clat (msec): min=48, max=405, avg=207.37, stdev=34.16 00:18:52.727 lat (msec): min=48, max=405, avg=210.60, stdev=34.16 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 90], 5.00th=[ 138], 10.00th=[ 144], 20.00th=[ 199], 00:18:52.727 | 30.00th=[ 205], 40.00th=[ 211], 50.00th=[ 215], 60.00th=[ 218], 00:18:52.727 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 232], 95.00th=[ 236], 00:18:52.727 | 99.00th=[ 292], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 405], 00:18:52.727 | 99.99th=[ 405] 00:18:52.727 bw ( KiB/s): min=69771, max=111104, per=7.30%, avg=77626.15, stdev=9724.74, samples=20 00:18:52.727 iops : min= 272, max= 434, avg=303.20, stdev=38.01, samples=20 00:18:52.727 lat (msec) : 50=0.13%, 100=1.03%, 250=97.35%, 500=1.49% 00:18:52.727 cpu : usr=0.81%, sys=1.10%, ctx=2692, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,3096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job7: (groupid=0, jobs=1): err= 0: pid=90932: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=677, BW=169MiB/s (178MB/s)(1709MiB/10089msec); 0 zone resets 00:18:52.727 slat (usec): min=21, max=35424, avg=1458.10, stdev=2507.12 00:18:52.727 clat (msec): min=7, max=180, avg=92.99, stdev=13.04 00:18:52.727 lat (msec): min=8, max=180, avg=94.45, stdev=13.02 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 88], 00:18:52.727 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 93], 00:18:52.727 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 124], 00:18:52.727 | 99.00th=[ 150], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 176], 00:18:52.727 | 99.99th=[ 182] 00:18:52.727 bw ( KiB/s): min=117995, max=184832, per=16.31%, avg=173402.40, stdev=14927.02, samples=20 00:18:52.727 iops : min= 460, max= 722, avg=677.15, stdev=58.45, samples=20 00:18:52.727 lat (msec) : 10=0.06%, 20=0.19%, 50=0.31%, 100=93.44%, 250=6.00% 00:18:52.727 cpu : usr=1.75%, sys=1.99%, ctx=8236, majf=0, minf=1 00:18:52.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:52.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.727 issued rwts: total=0,6834,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:52.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.727 job8: (groupid=0, jobs=1): err= 0: pid=90933: Sun Jul 14 18:34:58 2024 00:18:52.727 write: IOPS=313, BW=78.3MiB/s (82.1MB/s)(798MiB/10189msec); 0 zone resets 00:18:52.727 slat (usec): min=18, max=31774, avg=3093.70, stdev=5649.25 00:18:52.727 clat (msec): min=5, max=402, avg=201.03, stdev=39.71 00:18:52.727 lat (msec): min=5, max=402, avg=204.12, stdev=39.97 00:18:52.727 clat percentiles (msec): 00:18:52.727 | 1.00th=[ 68], 5.00th=[ 123], 10.00th=[ 132], 20.00th=[ 194], 00:18:52.727 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 215], 00:18:52.728 | 70.00th=[ 220], 80.00th=[ 224], 90.00th=[ 228], 95.00th=[ 232], 00:18:52.728 | 99.00th=[ 288], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 401], 00:18:52.728 | 99.99th=[ 401] 00:18:52.728 bw ( KiB/s): min=71680, max=124152, per=7.54%, avg=80133.00, stdev=14368.38, samples=20 00:18:52.728 iops : min= 280, max= 484, avg=312.95, stdev=55.98, samples=20 00:18:52.728 lat (msec) : 10=0.03%, 20=0.13%, 50=0.63%, 100=1.53%, 250=96.34% 00:18:52.728 lat (msec) : 500=1.35% 00:18:52.728 cpu : usr=0.65%, sys=1.00%, ctx=2339, majf=0, minf=1 00:18:52.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:52.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.728 issued rwts: total=0,3193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.728 job9: (groupid=0, jobs=1): err= 0: pid=90934: Sun Jul 14 18:34:58 2024 00:18:52.728 write: IOPS=321, BW=80.5MiB/s (84.4MB/s)(819MiB/10175msec); 0 zone resets 00:18:52.728 slat (usec): min=19, max=61920, avg=3046.69, stdev=5368.89 00:18:52.728 clat (msec): min=22, max=371, avg=195.64, stdev=24.51 00:18:52.728 lat (msec): min=22, max=371, avg=198.69, stdev=24.26 00:18:52.728 clat percentiles (msec): 00:18:52.728 | 1.00th=[ 121], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:18:52.728 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 197], 00:18:52.728 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 207], 95.00th=[ 245], 00:18:52.728 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:18:52.728 | 99.99th=[ 372] 00:18:52.728 bw ( KiB/s): min=65667, max=86188, per=7.74%, avg=82284.95, stdev=5525.07, samples=20 00:18:52.728 iops : min= 256, max= 336, avg=321.20, stdev=21.59, samples=20 00:18:52.728 lat (msec) : 50=0.37%, 100=0.49%, 250=96.21%, 500=2.93% 00:18:52.728 cpu : usr=0.58%, sys=1.06%, ctx=1472, majf=0, minf=1 00:18:52.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:52.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.728 issued rwts: total=0,3276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.728 job10: (groupid=0, jobs=1): err= 0: pid=90937: Sun Jul 14 18:34:58 2024 00:18:52.728 write: IOPS=322, BW=80.6MiB/s (84.6MB/s)(820MiB/10172msec); 0 zone resets 00:18:52.728 slat (usec): min=21, max=37531, avg=3043.92, stdev=5322.38 00:18:52.728 clat (msec): min=36, max=361, avg=195.28, stdev=22.11 00:18:52.728 lat (msec): min=36, max=361, avg=198.32, stdev=21.79 00:18:52.728 clat percentiles (msec): 00:18:52.728 | 1.00th=[ 133], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 
00:18:52.728 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 197], 00:18:52.728 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 205], 95.00th=[ 236], 00:18:52.728 | 99.00th=[ 262], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 363], 00:18:52.728 | 99.99th=[ 363] 00:18:52.728 bw ( KiB/s): min=65536, max=88064, per=7.75%, avg=82363.60, stdev=5301.01, samples=20 00:18:52.728 iops : min= 256, max= 344, avg=321.70, stdev=20.70, samples=20 00:18:52.728 lat (msec) : 50=0.24%, 100=0.49%, 250=96.13%, 500=3.14% 00:18:52.728 cpu : usr=0.65%, sys=1.03%, ctx=2264, majf=0, minf=1 00:18:52.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:52.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:52.728 issued rwts: total=0,3281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:52.728 00:18:52.728 Run status group 0 (all jobs): 00:18:52.728 WRITE: bw=1038MiB/s (1088MB/s), 75.3MiB/s-170MiB/s (78.9MB/s-178MB/s), io=10.3GiB (11.1GB), run=10073-10197msec 00:18:52.728 00:18:52.728 Disk stats (read/write): 00:18:52.728 nvme0n1: ios=49/6144, merge=0/0, ticks=51/1209346, in_queue=1209397, util=97.92% 00:18:52.728 nvme10n1: ios=49/6023, merge=0/0, ticks=41/1207935, in_queue=1207976, util=97.97% 00:18:52.728 nvme1n1: ios=40/13518, merge=0/0, ticks=42/1214331, in_queue=1214373, util=98.06% 00:18:52.728 nvme2n1: ios=5/6397, merge=0/0, ticks=10/1210237, in_queue=1210247, util=97.99% 00:18:52.728 nvme3n1: ios=0/6021, merge=0/0, ticks=0/1211474, in_queue=1211474, util=98.19% 00:18:52.728 nvme4n1: ios=0/6406, merge=0/0, ticks=0/1210423, in_queue=1210423, util=98.20% 00:18:52.728 nvme5n1: ios=0/6063, merge=0/0, ticks=0/1208057, in_queue=1208057, util=98.36% 00:18:52.728 nvme6n1: ios=0/13550, merge=0/0, ticks=0/1217199, in_queue=1217199, util=98.59% 00:18:52.728 nvme7n1: ios=0/6254, merge=0/0, ticks=0/1208042, in_queue=1208042, util=98.64% 00:18:52.728 nvme8n1: ios=0/6434, merge=0/0, ticks=0/1212675, in_queue=1212675, util=98.94% 00:18:52.728 nvme9n1: ios=0/6430, merge=0/0, ticks=0/1210581, in_queue=1210581, util=98.90% 00:18:52.728 18:34:58 -- target/multiconnection.sh@36 -- # sync 00:18:52.728 18:34:58 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:52.728 18:34:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.728 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:52.728 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.728 18:34:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.728 18:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.728 18:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 18:34:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.728 18:34:59 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:52.728 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:52.728 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.728 18:34:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:52.728 18:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.728 18:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 18:34:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.728 18:34:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:52.728 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:52.728 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:52.728 18:34:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.728 18:34:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:52.728 18:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.728 18:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 18:34:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.728 18:34:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:52.728 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:52.728 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:52.728 18:34:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.728 18:34:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:52.728 18:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.728 18:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 18:34:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.728 18:34:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:52.728 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:52.728 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.728 18:34:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:52.728 18:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.728 18:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 18:34:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.728 18:34:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:52.728 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:52.728 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:52.728 18:34:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.728 18:34:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.728 18:34:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:52.728 18:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.728 18:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 18:34:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.728 18:34:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.728 18:34:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:52.728 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:52.729 18:34:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:52.729 18:34:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.729 18:34:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.729 18:34:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:52.729 18:35:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.729 18:35:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:52.729 18:35:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.729 18:35:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:52.729 18:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.729 18:35:00 -- common/autotest_common.sh@10 -- # set +x 00:18:52.729 18:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.729 18:35:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.729 18:35:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:52.987 
NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:52.987 18:35:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:52.987 18:35:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.987 18:35:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.987 18:35:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:52.987 18:35:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.987 18:35:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:52.987 18:35:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.987 18:35:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:52.987 18:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.987 18:35:00 -- common/autotest_common.sh@10 -- # set +x 00:18:52.987 18:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.987 18:35:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.987 18:35:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:52.987 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:52.987 18:35:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:52.987 18:35:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.987 18:35:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:52.987 18:35:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:52.987 18:35:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:52.987 18:35:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.987 18:35:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:52.987 18:35:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:52.987 18:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.988 18:35:00 -- common/autotest_common.sh@10 -- # set +x 00:18:52.988 18:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.988 18:35:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.988 18:35:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:53.247 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:53.247 18:35:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:53.247 18:35:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:53.247 18:35:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:53.247 18:35:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:53.247 18:35:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:53.247 18:35:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:53.247 18:35:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:53.247 18:35:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:53.247 18:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.247 18:35:00 -- common/autotest_common.sh@10 -- # set +x 00:18:53.247 18:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.247 18:35:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.247 18:35:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:53.247 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:53.247 18:35:00 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:53.247 18:35:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:53.247 18:35:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:53.247 18:35:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:53.247 18:35:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:53.247 18:35:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:53.247 18:35:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:53.247 18:35:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:53.247 18:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.247 18:35:00 -- common/autotest_common.sh@10 -- # set +x 00:18:53.247 18:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.247 18:35:00 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:53.247 18:35:00 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:53.247 18:35:00 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:53.247 18:35:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:53.247 18:35:00 -- nvmf/common.sh@116 -- # sync 00:18:53.247 18:35:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:53.247 18:35:00 -- nvmf/common.sh@119 -- # set +e 00:18:53.247 18:35:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:53.247 18:35:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:53.505 rmmod nvme_tcp 00:18:53.505 rmmod nvme_fabrics 00:18:53.505 rmmod nvme_keyring 00:18:53.505 18:35:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:53.505 18:35:00 -- nvmf/common.sh@123 -- # set -e 00:18:53.505 18:35:00 -- nvmf/common.sh@124 -- # return 0 00:18:53.505 18:35:00 -- nvmf/common.sh@477 -- # '[' -n 90226 ']' 00:18:53.505 18:35:00 -- nvmf/common.sh@478 -- # killprocess 90226 00:18:53.505 18:35:00 -- common/autotest_common.sh@926 -- # '[' -z 90226 ']' 00:18:53.505 18:35:00 -- common/autotest_common.sh@930 -- # kill -0 90226 00:18:53.505 18:35:00 -- common/autotest_common.sh@931 -- # uname 00:18:53.505 18:35:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:53.505 18:35:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90226 00:18:53.505 killing process with pid 90226 00:18:53.505 18:35:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:53.505 18:35:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:53.505 18:35:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90226' 00:18:53.505 18:35:00 -- common/autotest_common.sh@945 -- # kill 90226 00:18:53.505 18:35:00 -- common/autotest_common.sh@950 -- # wait 90226 00:18:54.070 18:35:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:54.070 18:35:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:54.070 18:35:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:54.070 18:35:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.070 18:35:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:54.070 18:35:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.070 18:35:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.070 18:35:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.070 18:35:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:54.070 ************************************ 00:18:54.070 END TEST nvmf_multiconnection 00:18:54.070 
************************************ 00:18:54.070 00:18:54.070 real 0m50.420s 00:18:54.070 user 2m53.067s 00:18:54.070 sys 0m22.180s 00:18:54.070 18:35:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.070 18:35:01 -- common/autotest_common.sh@10 -- # set +x 00:18:54.328 18:35:01 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:54.328 18:35:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:54.328 18:35:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:54.328 18:35:01 -- common/autotest_common.sh@10 -- # set +x 00:18:54.328 ************************************ 00:18:54.328 START TEST nvmf_initiator_timeout 00:18:54.328 ************************************ 00:18:54.328 18:35:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:54.328 * Looking for test storage... 00:18:54.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:54.328 18:35:01 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:54.328 18:35:01 -- nvmf/common.sh@7 -- # uname -s 00:18:54.328 18:35:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.328 18:35:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.328 18:35:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.328 18:35:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.328 18:35:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.328 18:35:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.328 18:35:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.328 18:35:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.328 18:35:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.328 18:35:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.328 18:35:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:18:54.328 18:35:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:18:54.328 18:35:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.328 18:35:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.328 18:35:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:54.328 18:35:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:54.328 18:35:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.328 18:35:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.328 18:35:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.328 18:35:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.328 18:35:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.328 18:35:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.328 18:35:01 -- paths/export.sh@5 -- # export PATH 00:18:54.328 18:35:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.328 18:35:01 -- nvmf/common.sh@46 -- # : 0 00:18:54.328 18:35:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:54.328 18:35:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:54.328 18:35:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:54.328 18:35:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.328 18:35:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.328 18:35:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:54.328 18:35:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:54.328 18:35:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:54.328 18:35:01 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.328 18:35:01 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.328 18:35:01 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:54.328 18:35:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:54.328 18:35:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.328 18:35:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:54.328 18:35:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:54.328 18:35:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:54.328 18:35:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.328 18:35:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.328 18:35:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.328 18:35:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:54.328 18:35:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:54.328 18:35:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:54.328 18:35:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:54.328 18:35:01 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:18:54.328 18:35:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:54.328 18:35:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.328 18:35:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.328 18:35:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:54.328 18:35:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:54.328 18:35:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:54.328 18:35:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:54.328 18:35:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:54.328 18:35:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.328 18:35:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:54.328 18:35:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:54.328 18:35:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:54.328 18:35:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:54.328 18:35:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:54.328 18:35:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:54.328 Cannot find device "nvmf_tgt_br" 00:18:54.328 18:35:01 -- nvmf/common.sh@154 -- # true 00:18:54.328 18:35:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:54.329 Cannot find device "nvmf_tgt_br2" 00:18:54.329 18:35:01 -- nvmf/common.sh@155 -- # true 00:18:54.329 18:35:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:54.329 18:35:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:54.329 Cannot find device "nvmf_tgt_br" 00:18:54.329 18:35:01 -- nvmf/common.sh@157 -- # true 00:18:54.329 18:35:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:54.329 Cannot find device "nvmf_tgt_br2" 00:18:54.329 18:35:01 -- nvmf/common.sh@158 -- # true 00:18:54.329 18:35:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:54.329 18:35:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:54.329 18:35:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.329 18:35:01 -- nvmf/common.sh@161 -- # true 00:18:54.329 18:35:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.586 18:35:01 -- nvmf/common.sh@162 -- # true 00:18:54.586 18:35:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:54.586 18:35:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:54.586 18:35:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:54.586 18:35:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:54.586 18:35:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:54.586 18:35:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:54.586 18:35:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:54.586 18:35:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:54.586 18:35:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
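
nvmf_veth_init, whose commands appear above, builds the virtual topology the target runs in: the host side keeps 10.0.0.1 on nvmf_init_if, the nvmf_tgt_ns_spdk namespace gets 10.0.0.2 and 10.0.0.3 on its veth endpoints, and a bridge joins the host-side peers. A condensed sketch using the same names and addresses as this log:

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if end carries an address, the *_br end joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target endpoints into the namespace and assign addresses
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up, bridge the host-side peers, allow NVMe/TCP traffic in
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards traffic before the target is started.
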
00:18:54.586 18:35:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:54.586 18:35:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:54.586 18:35:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:54.586 18:35:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:54.586 18:35:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:54.586 18:35:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:54.586 18:35:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:54.586 18:35:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:54.586 18:35:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:54.586 18:35:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:54.586 18:35:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:54.586 18:35:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:54.586 18:35:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:54.586 18:35:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:54.586 18:35:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:54.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:54.587 00:18:54.587 --- 10.0.0.2 ping statistics --- 00:18:54.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.587 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:54.587 18:35:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:54.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:54.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:54.587 00:18:54.587 --- 10.0.0.3 ping statistics --- 00:18:54.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.587 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:54.587 18:35:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:54.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:54.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:54.587 00:18:54.587 --- 10.0.0.1 ping statistics --- 00:18:54.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.587 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:54.587 18:35:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.587 18:35:01 -- nvmf/common.sh@421 -- # return 0 00:18:54.587 18:35:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:54.587 18:35:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.587 18:35:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:54.587 18:35:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:54.587 18:35:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.587 18:35:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:54.587 18:35:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:54.587 18:35:01 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:54.587 18:35:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:54.587 18:35:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:54.587 18:35:01 -- common/autotest_common.sh@10 -- # set +x 00:18:54.587 18:35:01 -- nvmf/common.sh@469 -- # nvmfpid=91316 00:18:54.587 18:35:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:54.587 18:35:01 -- nvmf/common.sh@470 -- # waitforlisten 91316 00:18:54.587 18:35:01 -- common/autotest_common.sh@819 -- # '[' -z 91316 ']' 00:18:54.587 18:35:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.587 18:35:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:54.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.587 18:35:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.587 18:35:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:54.587 18:35:01 -- common/autotest_common.sh@10 -- # set +x 00:18:54.587 [2024-07-14 18:35:02.009005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:54.587 [2024-07-14 18:35:02.009075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.845 [2024-07-14 18:35:02.148314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:54.845 [2024-07-14 18:35:02.242396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:54.845 [2024-07-14 18:35:02.242625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.845 [2024-07-14 18:35:02.242644] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.845 [2024-07-14 18:35:02.242656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
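
nvmfappstart then prefixes the target command line with the namespace (NVMF_APP becomes ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt) and launches it in the background, recording the pid (91316 for this test). A much-reduced sketch of that start-up, with the test's elaborate cleanup trap collapsed to a plain kill:

NS=nvmf_tgt_ns_spdk
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# -i 0: shared-memory instance id, -e 0xFFFF: tracepoint group mask,
# -m 0xF: run reactors on cores 0-3 (the four "Reactor started" notices that follow)
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
trap 'kill "$nvmfpid" 2>/dev/null' SIGINT SIGTERM EXIT
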
00:18:54.845 [2024-07-14 18:35:02.242755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.845 [2024-07-14 18:35:02.242884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.845 [2024-07-14 18:35:02.243742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:54.845 [2024-07-14 18:35:02.243755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.779 18:35:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:55.779 18:35:03 -- common/autotest_common.sh@852 -- # return 0 00:18:55.779 18:35:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:55.779 18:35:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 18:35:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:55.779 18:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 Malloc0 00:18:55.779 18:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:55.779 18:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 Delay0 00:18:55.779 18:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.779 18:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 [2024-07-14 18:35:03.123120] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.779 18:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:55.779 18:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 18:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:55.779 18:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 18:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.779 18:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.779 18:35:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 [2024-07-14 18:35:03.155369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.779 18:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.779 18:35:03 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:56.046 18:35:03 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:56.046 18:35:03 -- common/autotest_common.sh@1177 -- # local i=0 00:18:56.046 18:35:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.046 18:35:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:56.046 18:35:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:57.953 18:35:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:57.953 18:35:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:57.953 18:35:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.953 18:35:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:57.953 18:35:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.953 18:35:05 -- common/autotest_common.sh@1187 -- # return 0 00:18:57.953 18:35:05 -- target/initiator_timeout.sh@35 -- # fio_pid=91398 00:18:57.953 18:35:05 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:57.953 18:35:05 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:57.953 [global] 00:18:57.953 thread=1 00:18:57.953 invalidate=1 00:18:57.953 rw=write 00:18:57.953 time_based=1 00:18:57.953 runtime=60 00:18:57.953 ioengine=libaio 00:18:57.953 direct=1 00:18:57.953 bs=4096 00:18:57.953 iodepth=1 00:18:57.953 norandommap=0 00:18:57.953 numjobs=1 00:18:57.953 00:18:58.212 verify_dump=1 00:18:58.212 verify_backlog=512 00:18:58.212 verify_state_save=0 00:18:58.212 do_verify=1 00:18:58.212 verify=crc32c-intel 00:18:58.212 [job0] 00:18:58.212 filename=/dev/nvme0n1 00:18:58.212 Could not set queue depth (nvme0n1) 00:18:58.212 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.212 fio-3.35 00:18:58.212 Starting 1 thread 00:19:01.499 18:35:08 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:01.499 18:35:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.499 18:35:08 -- common/autotest_common.sh@10 -- # set +x 00:19:01.499 true 00:19:01.499 18:35:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.499 18:35:08 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:01.499 18:35:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.499 18:35:08 -- common/autotest_common.sh@10 -- # set +x 00:19:01.499 true 00:19:01.499 18:35:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.499 18:35:08 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:01.499 18:35:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.499 18:35:08 -- common/autotest_common.sh@10 -- # set +x 00:19:01.499 true 00:19:01.499 18:35:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.499 18:35:08 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:01.499 18:35:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.499 18:35:08 -- common/autotest_common.sh@10 -- # set +x 00:19:01.499 true 00:19:01.499 18:35:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.499 18:35:08 -- 
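
Everything the timeout scenario needs is a delay bdev stacked on a malloc bdev: Delay0 starts with 30 us latencies, is exported through cnode1, and once fio is writing through /dev/nvme0n1 the test raises the delay latencies far past the initiator's timeout (values are in microseconds, so 31000000 is 31 s; the log pushes p99_write an order of magnitude higher still) before later dropping them back to 30 us so the run can finish. A condensed sketch of the sequence shown across the preceding lines, with rpc standing for scripts/rpc.py against the default socket:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

# backing bdev, delay wrapper, and the subsystem that exports it
rpc bdev_malloc_create 64 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# connect the initiator (the log also passes the --hostnqn/--hostid pair shown above)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# with fio writing in the background, stall the bdev, then restore it
for metric in avg_read avg_write p99_read p99_write; do
    rpc bdev_delay_update_latency Delay0 "$metric" 31000000   # 31 s
done
sleep 3
for metric in avg_read avg_write p99_read p99_write; do
    rpc bdev_delay_update_latency Delay0 "$metric" 30         # back to 30 us
done

The point is to force the initiator down its timeout path while writes are outstanding and still have the job complete once the latencies are restored, which the "fio successful as expected" message later in the log confirms.
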
target/initiator_timeout.sh@45 -- # sleep 3 00:19:04.035 18:35:11 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:04.035 18:35:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.035 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:19:04.035 true 00:19:04.035 18:35:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.035 18:35:11 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:04.035 18:35:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.035 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:19:04.035 true 00:19:04.035 18:35:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.035 18:35:11 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:04.035 18:35:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.035 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:19:04.035 true 00:19:04.035 18:35:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.035 18:35:11 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:04.035 18:35:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.035 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:19:04.035 true 00:19:04.035 18:35:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.035 18:35:11 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:04.035 18:35:11 -- target/initiator_timeout.sh@54 -- # wait 91398 00:20:00.332 00:20:00.332 job0: (groupid=0, jobs=1): err= 0: pid=91419: Sun Jul 14 18:36:05 2024 00:20:00.332 read: IOPS=711, BW=2847KiB/s (2916kB/s)(167MiB/60001msec) 00:20:00.332 slat (usec): min=12, max=110, avg=16.66, stdev= 5.10 00:20:00.332 clat (usec): min=142, max=748, avg=230.27, stdev=24.40 00:20:00.332 lat (usec): min=180, max=762, avg=246.93, stdev=25.26 00:20:00.332 clat percentiles (usec): 00:20:00.332 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:20:00.332 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:20:00.332 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 277], 00:20:00.332 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 363], 99.95th=[ 404], 00:20:00.332 | 99.99th=[ 644] 00:20:00.332 write: IOPS=716, BW=2867KiB/s (2936kB/s)(168MiB/60001msec); 0 zone resets 00:20:00.332 slat (usec): min=18, max=10402, avg=24.98, stdev=62.85 00:20:00.332 clat (usec): min=6, max=40547k, avg=1120.97, stdev=195517.44 00:20:00.332 lat (usec): min=147, max=40547k, avg=1145.95, stdev=195517.44 00:20:00.332 clat percentiles (usec): 00:20:00.332 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:20:00.332 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:20:00.332 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 217], 00:20:00.332 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 338], 00:20:00.332 | 99.99th=[ 1942] 00:20:00.332 bw ( KiB/s): min= 2840, max=10312, per=100.00%, avg=8612.10, stdev=1265.71, samples=39 00:20:00.332 iops : min= 710, max= 2578, avg=2153.03, stdev=316.43, samples=39 00:20:00.332 lat (usec) : 10=0.01%, 100=0.01%, 250=91.40%, 500=8.57%, 750=0.02% 00:20:00.332 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:20:00.332 cpu : usr=0.56%, sys=2.10%, ctx=85747, majf=0, minf=2 00:20:00.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.332 issued rwts: total=42711,43008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:00.332 00:20:00.332 Run status group 0 (all jobs): 00:20:00.332 READ: bw=2847KiB/s (2916kB/s), 2847KiB/s-2847KiB/s (2916kB/s-2916kB/s), io=167MiB (175MB), run=60001-60001msec 00:20:00.332 WRITE: bw=2867KiB/s (2936kB/s), 2867KiB/s-2867KiB/s (2936kB/s-2936kB/s), io=168MiB (176MB), run=60001-60001msec 00:20:00.332 00:20:00.332 Disk stats (read/write): 00:20:00.332 nvme0n1: ios=42759/42765, merge=0/0, ticks=10235/8196, in_queue=18431, util=99.64% 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:00.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:00.332 18:36:05 -- common/autotest_common.sh@1198 -- # local i=0 00:20:00.332 18:36:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:00.332 18:36:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:00.332 18:36:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:00.332 18:36:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:00.332 nvmf hotplug test: fio successful as expected 00:20:00.332 18:36:05 -- common/autotest_common.sh@1210 -- # return 0 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.332 18:36:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.332 18:36:05 -- common/autotest_common.sh@10 -- # set +x 00:20:00.332 18:36:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:00.332 18:36:05 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:00.332 18:36:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:00.332 18:36:05 -- nvmf/common.sh@116 -- # sync 00:20:00.332 18:36:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:00.332 18:36:05 -- nvmf/common.sh@119 -- # set +e 00:20:00.332 18:36:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:00.332 18:36:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:00.332 rmmod nvme_tcp 00:20:00.332 rmmod nvme_fabrics 00:20:00.332 rmmod nvme_keyring 00:20:00.332 18:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:00.332 18:36:05 -- nvmf/common.sh@123 -- # set -e 00:20:00.332 18:36:05 -- nvmf/common.sh@124 -- # return 0 00:20:00.332 18:36:05 -- nvmf/common.sh@477 -- # '[' -n 91316 ']' 00:20:00.332 18:36:05 -- nvmf/common.sh@478 -- # killprocess 91316 00:20:00.332 18:36:05 -- common/autotest_common.sh@926 -- # '[' -z 91316 ']' 00:20:00.332 18:36:05 -- common/autotest_common.sh@930 -- # kill -0 91316 00:20:00.332 18:36:05 -- common/autotest_common.sh@931 -- # uname 00:20:00.332 18:36:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:00.332 18:36:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91316 00:20:00.332 killing process with 
pid 91316 00:20:00.332 18:36:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:00.332 18:36:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:00.332 18:36:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91316' 00:20:00.332 18:36:05 -- common/autotest_common.sh@945 -- # kill 91316 00:20:00.332 18:36:05 -- common/autotest_common.sh@950 -- # wait 91316 00:20:00.332 18:36:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:00.332 18:36:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:00.332 18:36:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:00.332 18:36:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.332 18:36:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:00.332 18:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.332 18:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.332 18:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.332 18:36:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:00.332 00:20:00.332 real 1m4.612s 00:20:00.332 user 4m6.004s 00:20:00.332 sys 0m8.669s 00:20:00.332 18:36:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.332 18:36:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.332 ************************************ 00:20:00.332 END TEST nvmf_initiator_timeout 00:20:00.332 ************************************ 00:20:00.332 18:36:06 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:00.332 18:36:06 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:00.332 18:36:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:00.332 18:36:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.332 18:36:06 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:00.332 18:36:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.332 18:36:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.332 18:36:06 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:00.332 18:36:06 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:00.332 18:36:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:00.332 18:36:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:00.332 18:36:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.332 ************************************ 00:20:00.332 START TEST nvmf_multicontroller 00:20:00.332 ************************************ 00:20:00.332 18:36:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:00.332 * Looking for test storage... 
00:20:00.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.332 18:36:06 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.332 18:36:06 -- nvmf/common.sh@7 -- # uname -s 00:20:00.332 18:36:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.332 18:36:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.332 18:36:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.332 18:36:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.332 18:36:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.332 18:36:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.332 18:36:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.332 18:36:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.332 18:36:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.332 18:36:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.332 18:36:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:20:00.332 18:36:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:20:00.332 18:36:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.332 18:36:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.333 18:36:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.333 18:36:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.333 18:36:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.333 18:36:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.333 18:36:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.333 18:36:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.333 18:36:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.333 18:36:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.333 18:36:06 -- 
paths/export.sh@5 -- # export PATH 00:20:00.333 18:36:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.333 18:36:06 -- nvmf/common.sh@46 -- # : 0 00:20:00.333 18:36:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:00.333 18:36:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:00.333 18:36:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:00.333 18:36:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.333 18:36:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.333 18:36:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:00.333 18:36:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:00.333 18:36:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:00.333 18:36:06 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.333 18:36:06 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.333 18:36:06 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:00.333 18:36:06 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:00.333 18:36:06 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.333 18:36:06 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:00.333 18:36:06 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:00.333 18:36:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:00.333 18:36:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.333 18:36:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:00.333 18:36:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:00.333 18:36:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:00.333 18:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.333 18:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.333 18:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.333 18:36:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:00.333 18:36:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.333 18:36:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.333 18:36:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:00.333 18:36:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:00.333 18:36:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.333 18:36:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.333 18:36:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.333 18:36:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.333 18:36:06 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.333 18:36:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.333 18:36:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.333 18:36:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.333 18:36:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:00.333 18:36:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:00.333 Cannot find device "nvmf_tgt_br" 00:20:00.333 18:36:06 -- nvmf/common.sh@154 -- # true 00:20:00.333 18:36:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.333 Cannot find device "nvmf_tgt_br2" 00:20:00.333 18:36:06 -- nvmf/common.sh@155 -- # true 00:20:00.333 18:36:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:00.333 18:36:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:00.333 Cannot find device "nvmf_tgt_br" 00:20:00.333 18:36:06 -- nvmf/common.sh@157 -- # true 00:20:00.333 18:36:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:00.333 Cannot find device "nvmf_tgt_br2" 00:20:00.333 18:36:06 -- nvmf/common.sh@158 -- # true 00:20:00.333 18:36:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:00.333 18:36:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:00.333 18:36:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.333 18:36:06 -- nvmf/common.sh@161 -- # true 00:20:00.333 18:36:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.333 18:36:06 -- nvmf/common.sh@162 -- # true 00:20:00.333 18:36:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.333 18:36:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.333 18:36:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.333 18:36:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.333 18:36:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.333 18:36:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.333 18:36:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.333 18:36:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.333 18:36:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.333 18:36:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:00.333 18:36:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:00.333 18:36:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:00.333 18:36:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:00.333 18:36:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.333 18:36:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.333 18:36:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.333 18:36:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:00.333 18:36:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:00.333 18:36:06 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.333 18:36:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.333 18:36:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.333 18:36:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.333 18:36:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.333 18:36:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:00.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:00.333 00:20:00.333 --- 10.0.0.2 ping statistics --- 00:20:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.333 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:00.333 18:36:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:00.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:00.333 00:20:00.333 --- 10.0.0.3 ping statistics --- 00:20:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.333 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:00.333 18:36:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:00.333 00:20:00.333 --- 10.0.0.1 ping statistics --- 00:20:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.333 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:00.333 18:36:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.333 18:36:06 -- nvmf/common.sh@421 -- # return 0 00:20:00.333 18:36:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.333 18:36:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.333 18:36:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.333 18:36:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.333 18:36:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.333 18:36:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.333 18:36:06 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:00.333 18:36:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.333 18:36:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.333 18:36:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.333 18:36:06 -- nvmf/common.sh@469 -- # nvmfpid=92248 00:20:00.333 18:36:06 -- nvmf/common.sh@470 -- # waitforlisten 92248 00:20:00.333 18:36:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:00.333 18:36:06 -- common/autotest_common.sh@819 -- # '[' -z 92248 ']' 00:20:00.333 18:36:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.333 18:36:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.333 18:36:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
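
Both targets in this section (pid 91316 for the timeout test, pid 92248 here) are started through waitforlisten, which blocks until the JSON-RPC server answers on /var/tmp/spdk.sock before any rpc_cmd is issued; bdevperf below is waited on the same way with its own socket. A simplified stand-in for that helper (the real one in autotest_common.sh is more defensive), assuming rpc_get_methods as the readiness probe:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        # give up early if the target died during startup
        kill -0 "$pid" 2>/dev/null || return 1
        # ready once a trivial RPC succeeds on the socket
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
            rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

# usage: waitforlisten "$nvmfpid"                         (default /var/tmp/spdk.sock)
#        waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
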
00:20:00.333 18:36:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.333 18:36:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 [2024-07-14 18:36:06.731957] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:00.334 [2024-07-14 18:36:06.732041] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.334 [2024-07-14 18:36:06.873753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:00.334 [2024-07-14 18:36:06.960520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.334 [2024-07-14 18:36:06.960818] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.334 [2024-07-14 18:36:06.960850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.334 [2024-07-14 18:36:06.960871] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.334 [2024-07-14 18:36:06.961143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.334 [2024-07-14 18:36:06.961313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.334 [2024-07-14 18:36:06.961338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.334 18:36:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:00.334 18:36:07 -- common/autotest_common.sh@852 -- # return 0 00:20:00.334 18:36:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:00.334 18:36:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:00.334 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.592 18:36:07 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 [2024-07-14 18:36:07.801344] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 Malloc0 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 [2024-07-14 18:36:07.866833] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 [2024-07-14 18:36:07.874757] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 Malloc1 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:00.592 18:36:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.592 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 18:36:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.592 18:36:07 -- host/multicontroller.sh@44 -- # bdevperf_pid=92300 00:20:00.593 18:36:07 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:00.593 18:36:07 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.593 18:36:07 -- host/multicontroller.sh@47 -- # waitforlisten 92300 /var/tmp/bdevperf.sock 00:20:00.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
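The rpc_cmd calls above wrap SPDK's rpc.py against /var/tmp/spdk.sock; expanded, the target-side configuration built for this test is roughly the following sketch (only commands visible in the log: two subsystems, each backed by a 64 MiB / 512 B-block malloc bdev and listening on ports 4420 and 4421 of the namespaced 10.0.0.2 address):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # cnode1: Malloc0 namespace, listeners on 4420 and 4421
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2: same layout backed by Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421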
00:20:00.593 18:36:07 -- common/autotest_common.sh@819 -- # '[' -z 92300 ']' 00:20:00.593 18:36:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.593 18:36:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.593 18:36:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.593 18:36:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.593 18:36:07 -- common/autotest_common.sh@10 -- # set +x 00:20:01.526 18:36:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.526 18:36:08 -- common/autotest_common.sh@852 -- # return 0 00:20:01.526 18:36:08 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:01.526 18:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.526 18:36:08 -- common/autotest_common.sh@10 -- # set +x 00:20:01.788 NVMe0n1 00:20:01.788 18:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.788 18:36:09 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:01.788 18:36:09 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:01.788 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.788 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.788 18:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.788 1 00:20:01.788 18:36:09 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:01.788 18:36:09 -- common/autotest_common.sh@640 -- # local es=0 00:20:01.788 18:36:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:01.788 18:36:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:01.788 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.788 18:36:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:01.788 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.788 18:36:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:01.788 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.788 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.788 2024/07/14 18:36:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:01.788 request: 00:20:01.788 { 00:20:01.788 "method": "bdev_nvme_attach_controller", 00:20:01.788 "params": { 00:20:01.788 "name": "NVMe0", 00:20:01.788 "trtype": "tcp", 00:20:01.788 "traddr": "10.0.0.2", 00:20:01.788 "hostnqn": 
"nqn.2021-09-7.io.spdk:00001", 00:20:01.788 "hostaddr": "10.0.0.2", 00:20:01.788 "hostsvcid": "60000", 00:20:01.788 "adrfam": "ipv4", 00:20:01.788 "trsvcid": "4420", 00:20:01.788 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:01.788 } 00:20:01.788 } 00:20:01.788 Got JSON-RPC error response 00:20:01.788 GoRPCClient: error on JSON-RPC call 00:20:01.788 18:36:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:01.788 18:36:09 -- common/autotest_common.sh@643 -- # es=1 00:20:01.788 18:36:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:01.788 18:36:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:01.788 18:36:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:01.788 18:36:09 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:01.788 18:36:09 -- common/autotest_common.sh@640 -- # local es=0 00:20:01.788 18:36:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:01.788 18:36:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.789 18:36:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:01.789 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.789 2024/07/14 18:36:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:01.789 request: 00:20:01.789 { 00:20:01.789 "method": "bdev_nvme_attach_controller", 00:20:01.789 "params": { 00:20:01.789 "name": "NVMe0", 00:20:01.789 "trtype": "tcp", 00:20:01.789 "traddr": "10.0.0.2", 00:20:01.789 "hostaddr": "10.0.0.2", 00:20:01.789 "hostsvcid": "60000", 00:20:01.789 "adrfam": "ipv4", 00:20:01.789 "trsvcid": "4420", 00:20:01.789 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:01.789 } 00:20:01.789 } 00:20:01.789 Got JSON-RPC error response 00:20:01.789 GoRPCClient: error on JSON-RPC call 00:20:01.789 18:36:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:01.789 18:36:09 -- common/autotest_common.sh@643 -- # es=1 00:20:01.789 18:36:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:01.789 18:36:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:01.789 18:36:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:01.789 18:36:09 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@640 -- # local es=0 00:20:01.789 18:36:09 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.789 18:36:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.789 2024/07/14 18:36:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:01.789 request: 00:20:01.789 { 00:20:01.789 "method": "bdev_nvme_attach_controller", 00:20:01.789 "params": { 00:20:01.789 "name": "NVMe0", 00:20:01.789 "trtype": "tcp", 00:20:01.789 "traddr": "10.0.0.2", 00:20:01.789 "hostaddr": "10.0.0.2", 00:20:01.789 "hostsvcid": "60000", 00:20:01.789 "adrfam": "ipv4", 00:20:01.789 "trsvcid": "4420", 00:20:01.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.789 "multipath": "disable" 00:20:01.789 } 00:20:01.789 } 00:20:01.789 Got JSON-RPC error response 00:20:01.789 GoRPCClient: error on JSON-RPC call 00:20:01.789 18:36:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:01.789 18:36:09 -- common/autotest_common.sh@643 -- # es=1 00:20:01.789 18:36:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:01.789 18:36:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:01.789 18:36:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:01.789 18:36:09 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:01.789 18:36:09 -- common/autotest_common.sh@640 -- # local es=0 00:20:01.789 18:36:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:01.789 18:36:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:01.789 18:36:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:01.789 18:36:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:01.789 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.789 2024/07/14 18:36:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:01.789 request: 00:20:01.789 { 00:20:01.789 "method": "bdev_nvme_attach_controller", 00:20:01.789 "params": { 00:20:01.789 "name": "NVMe0", 00:20:01.789 "trtype": "tcp", 00:20:01.789 "traddr": "10.0.0.2", 00:20:01.789 "hostaddr": "10.0.0.2", 00:20:01.789 "hostsvcid": "60000", 00:20:01.789 "adrfam": "ipv4", 00:20:01.789 "trsvcid": "4420", 00:20:01.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.789 "multipath": "failover" 00:20:01.789 } 00:20:01.789 } 00:20:01.789 Got JSON-RPC error response 00:20:01.789 GoRPCClient: error on JSON-RPC call 00:20:01.789 18:36:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:01.789 18:36:09 -- common/autotest_common.sh@643 -- # es=1 00:20:01.789 18:36:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:01.789 18:36:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:01.789 18:36:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:01.789 18:36:09 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.789 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.789 00:20:01.789 18:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.789 18:36:09 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.789 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.789 18:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.789 18:36:09 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:01.789 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.789 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:02.070 00:20:02.070 18:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.070 18:36:09 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.070 18:36:09 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:02.070 18:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.070 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:02.070 18:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.070 18:36:09 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:02.070 18:36:09 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:03.015 0 00:20:03.015 18:36:10 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:03.015 18:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.015 18:36:10 -- common/autotest_common.sh@10 -- # set +x 00:20:03.016 18:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.016 18:36:10 -- host/multicontroller.sh@100 -- # killprocess 92300 00:20:03.016 
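Condensed sketch of the bdevperf-side attach logic exercised above, all against /var/tmp/bdevperf.sock: re-attaching under an existing controller name is rejected (Code=-114) for a different hostnqn, a different subsystem, with multipath disabled, or with failover toward the already-attached path, while a second listener port of the same subsystem is accepted as an additional path:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # first path: exposes bdev NVMe0n1 inside bdevperf
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # rejected variants, as in the JSON-RPC errors above: same name with
    #   -q nqn.2021-09-7.io.spdk:00001, same name against cnode2,
    #   -x disable, and -x failover toward the already-attached 4420 path
    # second path on port 4421 of the same subsystem is accepted, then detached again
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1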
18:36:10 -- common/autotest_common.sh@926 -- # '[' -z 92300 ']' 00:20:03.016 18:36:10 -- common/autotest_common.sh@930 -- # kill -0 92300 00:20:03.016 18:36:10 -- common/autotest_common.sh@931 -- # uname 00:20:03.016 18:36:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.016 18:36:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92300 00:20:03.274 killing process with pid 92300 00:20:03.274 18:36:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:03.274 18:36:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:03.274 18:36:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92300' 00:20:03.274 18:36:10 -- common/autotest_common.sh@945 -- # kill 92300 00:20:03.274 18:36:10 -- common/autotest_common.sh@950 -- # wait 92300 00:20:03.533 18:36:10 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.533 18:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.533 18:36:10 -- common/autotest_common.sh@10 -- # set +x 00:20:03.533 18:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.533 18:36:10 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:03.533 18:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.533 18:36:10 -- common/autotest_common.sh@10 -- # set +x 00:20:03.533 18:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.533 18:36:10 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:03.533 18:36:10 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:03.533 18:36:10 -- common/autotest_common.sh@1597 -- # read -r file 00:20:03.533 18:36:10 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:03.533 18:36:10 -- common/autotest_common.sh@1596 -- # sort -u 00:20:03.533 18:36:10 -- common/autotest_common.sh@1598 -- # cat 00:20:03.533 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:03.533 [2024-07-14 18:36:07.989245] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:03.533 [2024-07-14 18:36:07.989365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92300 ] 00:20:03.533 [2024-07-14 18:36:08.131703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.533 [2024-07-14 18:36:08.200857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.533 [2024-07-14 18:36:09.242839] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 433dbf6c-3c76-4d06-9b53-9ce845b69caf already exists 00:20:03.533 [2024-07-14 18:36:09.242899] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:433dbf6c-3c76-4d06-9b53-9ce845b69caf alias for bdev NVMe1n1 00:20:03.533 [2024-07-14 18:36:09.242920] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:03.533 Running I/O for 1 seconds... 
00:20:03.533 00:20:03.533 Latency(us) 00:20:03.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.533 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:03.533 NVMe0n1 : 1.00 21237.61 82.96 0.00 0.00 6019.15 2651.23 13762.56 00:20:03.534 =================================================================================================================== 00:20:03.534 Total : 21237.61 82.96 0.00 0.00 6019.15 2651.23 13762.56 00:20:03.534 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.534 00:20:03.534 Latency(us) 00:20:03.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.534 =================================================================================================================== 00:20:03.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.534 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:03.534 18:36:10 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:03.534 18:36:10 -- common/autotest_common.sh@1597 -- # read -r file 00:20:03.534 18:36:10 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:03.534 18:36:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:03.534 18:36:10 -- nvmf/common.sh@116 -- # sync 00:20:03.534 18:36:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:03.534 18:36:10 -- nvmf/common.sh@119 -- # set +e 00:20:03.534 18:36:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:03.534 18:36:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:03.534 rmmod nvme_tcp 00:20:03.534 rmmod nvme_fabrics 00:20:03.534 rmmod nvme_keyring 00:20:03.534 18:36:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:03.534 18:36:10 -- nvmf/common.sh@123 -- # set -e 00:20:03.534 18:36:10 -- nvmf/common.sh@124 -- # return 0 00:20:03.534 18:36:10 -- nvmf/common.sh@477 -- # '[' -n 92248 ']' 00:20:03.534 18:36:10 -- nvmf/common.sh@478 -- # killprocess 92248 00:20:03.534 18:36:10 -- common/autotest_common.sh@926 -- # '[' -z 92248 ']' 00:20:03.534 18:36:10 -- common/autotest_common.sh@930 -- # kill -0 92248 00:20:03.534 18:36:10 -- common/autotest_common.sh@931 -- # uname 00:20:03.534 18:36:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.534 18:36:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92248 00:20:03.534 killing process with pid 92248 00:20:03.534 18:36:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:03.534 18:36:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:03.534 18:36:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92248' 00:20:03.534 18:36:10 -- common/autotest_common.sh@945 -- # kill 92248 00:20:03.534 18:36:10 -- common/autotest_common.sh@950 -- # wait 92248 00:20:03.793 18:36:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:03.793 18:36:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:03.793 18:36:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:03.793 18:36:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.793 18:36:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:03.793 18:36:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.793 18:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.793 18:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.793 18:36:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:03.793 
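Roughly what the nvmftestfini / nvmf_tcp_fini teardown above does, inferred from the function names in the xtrace (a sketch; the namespace removal is what _remove_spdk_ns implies rather than a command shown verbatim):

    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 92248: stop the namespaced nvmf_tgt
    modprobe -v -r nvme-tcp              # unloads nvme_tcp (and, per the rmmod lines, fabrics/keyring)
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk     # what _remove_spdk_ns boils down to
    ip -4 addr flush nvmf_init_if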
00:20:03.793 real 0m4.985s 00:20:03.793 user 0m15.775s 00:20:03.793 sys 0m1.090s 00:20:03.793 18:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.793 18:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:03.793 ************************************ 00:20:03.793 END TEST nvmf_multicontroller 00:20:03.793 ************************************ 00:20:04.052 18:36:11 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:04.052 18:36:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:04.052 18:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:04.052 18:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:04.052 ************************************ 00:20:04.052 START TEST nvmf_aer 00:20:04.052 ************************************ 00:20:04.052 18:36:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:04.052 * Looking for test storage... 00:20:04.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:04.052 18:36:11 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.052 18:36:11 -- nvmf/common.sh@7 -- # uname -s 00:20:04.052 18:36:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.052 18:36:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.052 18:36:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.052 18:36:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.052 18:36:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.052 18:36:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.052 18:36:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.052 18:36:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.052 18:36:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.052 18:36:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.052 18:36:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:20:04.052 18:36:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:20:04.052 18:36:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.052 18:36:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.052 18:36:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.052 18:36:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.052 18:36:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.052 18:36:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.052 18:36:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.052 18:36:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.052 18:36:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.052 18:36:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.052 18:36:11 -- paths/export.sh@5 -- # export PATH 00:20:04.052 18:36:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.052 18:36:11 -- nvmf/common.sh@46 -- # : 0 00:20:04.052 18:36:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:04.052 18:36:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:04.052 18:36:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:04.052 18:36:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.053 18:36:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.053 18:36:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:04.053 18:36:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:04.053 18:36:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:04.053 18:36:11 -- host/aer.sh@11 -- # nvmftestinit 00:20:04.053 18:36:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:04.053 18:36:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.053 18:36:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:04.053 18:36:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:04.053 18:36:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:04.053 18:36:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.053 18:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.053 18:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.053 18:36:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:04.053 18:36:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:04.053 18:36:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:04.053 18:36:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:04.053 18:36:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:04.053 18:36:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:04.053 18:36:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.053 18:36:11 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.053 18:36:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.053 18:36:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:04.053 18:36:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.053 18:36:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.053 18:36:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.053 18:36:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.053 18:36:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.053 18:36:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.053 18:36:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.053 18:36:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.053 18:36:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:04.053 18:36:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:04.053 Cannot find device "nvmf_tgt_br" 00:20:04.053 18:36:11 -- nvmf/common.sh@154 -- # true 00:20:04.053 18:36:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.053 Cannot find device "nvmf_tgt_br2" 00:20:04.053 18:36:11 -- nvmf/common.sh@155 -- # true 00:20:04.053 18:36:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:04.053 18:36:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:04.053 Cannot find device "nvmf_tgt_br" 00:20:04.053 18:36:11 -- nvmf/common.sh@157 -- # true 00:20:04.053 18:36:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:04.053 Cannot find device "nvmf_tgt_br2" 00:20:04.053 18:36:11 -- nvmf/common.sh@158 -- # true 00:20:04.053 18:36:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:04.053 18:36:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:04.053 18:36:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.053 18:36:11 -- nvmf/common.sh@161 -- # true 00:20:04.053 18:36:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.053 18:36:11 -- nvmf/common.sh@162 -- # true 00:20:04.053 18:36:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.312 18:36:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.312 18:36:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.312 18:36:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.312 18:36:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.312 18:36:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.312 18:36:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.312 18:36:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.312 18:36:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:04.312 18:36:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:04.312 18:36:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:04.312 18:36:11 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:04.312 18:36:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:04.312 18:36:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.312 18:36:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.312 18:36:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.312 18:36:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:04.312 18:36:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:04.312 18:36:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.312 18:36:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.312 18:36:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.312 18:36:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.312 18:36:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.312 18:36:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:04.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:20:04.312 00:20:04.312 --- 10.0.0.2 ping statistics --- 00:20:04.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.312 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:04.312 18:36:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:04.312 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.312 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:04.312 00:20:04.312 --- 10.0.0.3 ping statistics --- 00:20:04.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.312 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:04.312 18:36:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:04.312 00:20:04.312 --- 10.0.0.1 ping statistics --- 00:20:04.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.312 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:04.312 18:36:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.312 18:36:11 -- nvmf/common.sh@421 -- # return 0 00:20:04.312 18:36:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:04.312 18:36:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.312 18:36:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:04.312 18:36:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:04.312 18:36:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.312 18:36:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:04.312 18:36:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:04.312 18:36:11 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:04.312 18:36:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:04.312 18:36:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:04.312 18:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 18:36:11 -- nvmf/common.sh@469 -- # nvmfpid=92553 00:20:04.312 18:36:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:04.312 18:36:11 -- nvmf/common.sh@470 -- # waitforlisten 92553 00:20:04.312 18:36:11 -- common/autotest_common.sh@819 -- # '[' -z 92553 ']' 00:20:04.312 18:36:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.312 18:36:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.312 18:36:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.312 18:36:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.313 18:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:04.313 [2024-07-14 18:36:11.726845] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:04.313 [2024-07-14 18:36:11.727093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.571 [2024-07-14 18:36:11.868626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.571 [2024-07-14 18:36:11.959324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:04.571 [2024-07-14 18:36:11.959711] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.571 [2024-07-14 18:36:11.959851] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.571 [2024-07-14 18:36:11.960036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
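Per the app_setup_trace notice above, a trace snapshot could be captured while this target instance is alive; a sketch, with the build/bin location of the spdk_trace tool being an assumption:

    # live snapshot of the trace ring for shm id 0, as the notice suggests
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory ring for offline decoding
    cp /dev/shm/nvmf_trace.0 .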
00:20:04.571 [2024-07-14 18:36:11.960335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.571 [2024-07-14 18:36:11.960425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.571 [2024-07-14 18:36:11.960501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.571 [2024-07-14 18:36:11.960514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.507 18:36:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.507 18:36:12 -- common/autotest_common.sh@852 -- # return 0 00:20:05.507 18:36:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:05.507 18:36:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 18:36:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.507 18:36:12 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.507 18:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 [2024-07-14 18:36:12.724275] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.507 18:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.507 18:36:12 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:05.507 18:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 Malloc0 00:20:05.507 18:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.507 18:36:12 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:05.507 18:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 18:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.507 18:36:12 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.507 18:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 18:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.507 18:36:12 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.507 18:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 [2024-07-14 18:36:12.794442] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.507 18:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.507 18:36:12 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:05.507 18:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.507 18:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:05.507 [2024-07-14 18:36:12.802149] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:05.507 [ 00:20:05.507 { 00:20:05.507 "allow_any_host": true, 00:20:05.507 "hosts": [], 00:20:05.507 "listen_addresses": [], 00:20:05.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.507 "subtype": "Discovery" 00:20:05.507 }, 00:20:05.507 { 00:20:05.507 "allow_any_host": true, 00:20:05.507 "hosts": 
[], 00:20:05.507 "listen_addresses": [ 00:20:05.507 { 00:20:05.507 "adrfam": "IPv4", 00:20:05.507 "traddr": "10.0.0.2", 00:20:05.507 "transport": "TCP", 00:20:05.507 "trsvcid": "4420", 00:20:05.507 "trtype": "TCP" 00:20:05.507 } 00:20:05.507 ], 00:20:05.507 "max_cntlid": 65519, 00:20:05.507 "max_namespaces": 2, 00:20:05.507 "min_cntlid": 1, 00:20:05.507 "model_number": "SPDK bdev Controller", 00:20:05.507 "namespaces": [ 00:20:05.507 { 00:20:05.507 "bdev_name": "Malloc0", 00:20:05.507 "name": "Malloc0", 00:20:05.507 "nguid": "5ECF1830577D41C4A4CFD64EC8D27D08", 00:20:05.507 "nsid": 1, 00:20:05.507 "uuid": "5ecf1830-577d-41c4-a4cf-d64ec8d27d08" 00:20:05.507 } 00:20:05.507 ], 00:20:05.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.507 "serial_number": "SPDK00000000000001", 00:20:05.507 "subtype": "NVMe" 00:20:05.507 } 00:20:05.507 ] 00:20:05.507 18:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.507 18:36:12 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:05.507 18:36:12 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:05.507 18:36:12 -- host/aer.sh@33 -- # aerpid=92607 00:20:05.507 18:36:12 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:05.507 18:36:12 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:05.507 18:36:12 -- common/autotest_common.sh@1244 -- # local i=0 00:20:05.507 18:36:12 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.507 18:36:12 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:20:05.507 18:36:12 -- common/autotest_common.sh@1247 -- # i=1 00:20:05.507 18:36:12 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:05.507 18:36:12 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.507 18:36:12 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:20:05.507 18:36:12 -- common/autotest_common.sh@1247 -- # i=2 00:20:05.507 18:36:12 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:05.766 18:36:13 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.766 18:36:13 -- common/autotest_common.sh@1251 -- # '[' '!' 
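The AER exercise around this point, condensed into a sketch (commands as they appear in the log; it is assumed, consistent with the waitforfile loop above, that the aer tool creates the touch file once its AER callback is registered):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    touch_file=/tmp/aer_touch_file
    rm -f "$touch_file"
    # host-side listener: connects to cnode1 over TCP and arms AER callbacks
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t "$touch_file" &
    aerpid=$!
    while [ ! -e "$touch_file" ]; do sleep 0.1; done   # waitforfile
    # adding a second namespace now raises a Namespace Attribute Changed AER,
    # reported below as 'aer_cb - Changed Namespace'
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"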
-e /tmp/aer_touch_file ']' 00:20:05.766 18:36:13 -- common/autotest_common.sh@1255 -- # return 0 00:20:05.766 18:36:13 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:05.766 18:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.766 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.766 Malloc1 00:20:05.766 18:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.766 18:36:13 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:05.766 18:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.766 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.766 18:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.766 18:36:13 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:05.766 18:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.766 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.766 [ 00:20:05.766 { 00:20:05.766 "allow_any_host": true, 00:20:05.766 "hosts": [], 00:20:05.766 "listen_addresses": [], 00:20:05.766 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.766 "subtype": "Discovery" 00:20:05.766 }, 00:20:05.766 { 00:20:05.766 "allow_any_host": true, 00:20:05.766 "hosts": [], 00:20:05.766 "listen_addresses": [ 00:20:05.766 { 00:20:05.766 "adrfam": "IPv4", 00:20:05.766 "traddr": "10.0.0.2", 00:20:05.766 "transport": "TCP", 00:20:05.766 "trsvcid": "4420", 00:20:05.766 "trtype": "TCP" 00:20:05.766 } 00:20:05.766 ], 00:20:05.766 "max_cntlid": 65519, 00:20:05.766 "max_namespaces": 2, 00:20:05.766 "min_cntlid": 1, 00:20:05.766 "model_number": "SPDK bdev Controller", 00:20:05.766 Asynchronous Event Request test 00:20:05.766 Attaching to 10.0.0.2 00:20:05.766 Attached to 10.0.0.2 00:20:05.766 Registering asynchronous event callbacks... 00:20:05.766 Starting namespace attribute notice tests for all controllers... 00:20:05.766 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:05.766 aer_cb - Changed Namespace 00:20:05.766 Cleaning up... 
00:20:05.766 "namespaces": [ 00:20:05.766 { 00:20:05.766 "bdev_name": "Malloc0", 00:20:05.766 "name": "Malloc0", 00:20:05.766 "nguid": "5ECF1830577D41C4A4CFD64EC8D27D08", 00:20:05.766 "nsid": 1, 00:20:05.766 "uuid": "5ecf1830-577d-41c4-a4cf-d64ec8d27d08" 00:20:05.766 }, 00:20:05.766 { 00:20:05.766 "bdev_name": "Malloc1", 00:20:05.766 "name": "Malloc1", 00:20:05.766 "nguid": "3CDA8B2C42BF43E9A6B2D4B376958D9A", 00:20:05.766 "nsid": 2, 00:20:05.766 "uuid": "3cda8b2c-42bf-43e9-a6b2-d4b376958d9a" 00:20:05.766 } 00:20:05.766 ], 00:20:05.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.767 "serial_number": "SPDK00000000000001", 00:20:05.767 "subtype": "NVMe" 00:20:05.767 } 00:20:05.767 ] 00:20:05.767 18:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.767 18:36:13 -- host/aer.sh@43 -- # wait 92607 00:20:05.767 18:36:13 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:05.767 18:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.767 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.767 18:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.767 18:36:13 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:05.767 18:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.767 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.767 18:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.767 18:36:13 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.767 18:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.767 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 18:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.025 18:36:13 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:06.025 18:36:13 -- host/aer.sh@51 -- # nvmftestfini 00:20:06.025 18:36:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:06.025 18:36:13 -- nvmf/common.sh@116 -- # sync 00:20:06.025 18:36:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:06.025 18:36:13 -- nvmf/common.sh@119 -- # set +e 00:20:06.025 18:36:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:06.025 18:36:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:06.025 rmmod nvme_tcp 00:20:06.025 rmmod nvme_fabrics 00:20:06.025 rmmod nvme_keyring 00:20:06.025 18:36:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:06.025 18:36:13 -- nvmf/common.sh@123 -- # set -e 00:20:06.025 18:36:13 -- nvmf/common.sh@124 -- # return 0 00:20:06.025 18:36:13 -- nvmf/common.sh@477 -- # '[' -n 92553 ']' 00:20:06.025 18:36:13 -- nvmf/common.sh@478 -- # killprocess 92553 00:20:06.025 18:36:13 -- common/autotest_common.sh@926 -- # '[' -z 92553 ']' 00:20:06.025 18:36:13 -- common/autotest_common.sh@930 -- # kill -0 92553 00:20:06.025 18:36:13 -- common/autotest_common.sh@931 -- # uname 00:20:06.025 18:36:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:06.025 18:36:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92553 00:20:06.025 18:36:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:06.025 18:36:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:06.025 18:36:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92553' 00:20:06.025 killing process with pid 92553 00:20:06.025 18:36:13 -- common/autotest_common.sh@945 -- # kill 92553 00:20:06.025 [2024-07-14 18:36:13.326342] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 18:36:13 -- common/autotest_common.sh@950 -- # wait 92553 00:20:06.282 18:36:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:06.282 18:36:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:06.282 18:36:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:06.282 18:36:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.282 18:36:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:06.282 18:36:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.282 18:36:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.282 18:36:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.282 18:36:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:06.282 00:20:06.282 real 0m2.327s 00:20:06.282 user 0m6.449s 00:20:06.282 sys 0m0.662s 00:20:06.282 18:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.282 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:06.282 ************************************ 00:20:06.282 END TEST nvmf_aer 00:20:06.282 ************************************ 00:20:06.282 18:36:13 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:06.282 18:36:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:06.282 18:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:06.282 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:06.282 ************************************ 00:20:06.282 START TEST nvmf_async_init 00:20:06.282 ************************************ 00:20:06.282 18:36:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:06.282 * Looking for test storage...
00:20:06.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.282 18:36:13 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.282 18:36:13 -- nvmf/common.sh@7 -- # uname -s 00:20:06.282 18:36:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.282 18:36:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.282 18:36:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.540 18:36:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.540 18:36:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.540 18:36:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.540 18:36:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.540 18:36:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.540 18:36:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.540 18:36:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.540 18:36:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:20:06.540 18:36:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:20:06.540 18:36:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.540 18:36:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.540 18:36:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.540 18:36:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.540 18:36:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.540 18:36:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.540 18:36:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.540 18:36:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.540 18:36:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.541 18:36:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.541 18:36:13 -- 
paths/export.sh@5 -- # export PATH 00:20:06.541 18:36:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.541 18:36:13 -- nvmf/common.sh@46 -- # : 0 00:20:06.541 18:36:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:06.541 18:36:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:06.541 18:36:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:06.541 18:36:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.541 18:36:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.541 18:36:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:06.541 18:36:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:06.541 18:36:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:06.541 18:36:13 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:06.541 18:36:13 -- host/async_init.sh@14 -- # null_block_size=512 00:20:06.541 18:36:13 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:06.541 18:36:13 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:06.541 18:36:13 -- host/async_init.sh@20 -- # uuidgen 00:20:06.541 18:36:13 -- host/async_init.sh@20 -- # tr -d - 00:20:06.541 18:36:13 -- host/async_init.sh@20 -- # nguid=5eead46e8ef9425888ee0b7cfd2d37d1 00:20:06.541 18:36:13 -- host/async_init.sh@22 -- # nvmftestinit 00:20:06.541 18:36:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:06.541 18:36:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.541 18:36:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:06.541 18:36:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:06.541 18:36:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:06.541 18:36:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.541 18:36:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.541 18:36:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.541 18:36:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:06.541 18:36:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:06.541 18:36:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:06.541 18:36:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:06.541 18:36:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:06.541 18:36:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:06.541 18:36:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.541 18:36:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.541 18:36:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.541 18:36:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:06.541 18:36:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.541 18:36:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.541 18:36:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.541 18:36:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.541 18:36:13 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.541 18:36:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.541 18:36:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.541 18:36:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.541 18:36:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:06.541 18:36:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:06.541 Cannot find device "nvmf_tgt_br" 00:20:06.541 18:36:13 -- nvmf/common.sh@154 -- # true 00:20:06.541 18:36:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.541 Cannot find device "nvmf_tgt_br2" 00:20:06.541 18:36:13 -- nvmf/common.sh@155 -- # true 00:20:06.541 18:36:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:06.541 18:36:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:06.541 Cannot find device "nvmf_tgt_br" 00:20:06.541 18:36:13 -- nvmf/common.sh@157 -- # true 00:20:06.541 18:36:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:06.541 Cannot find device "nvmf_tgt_br2" 00:20:06.541 18:36:13 -- nvmf/common.sh@158 -- # true 00:20:06.541 18:36:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:06.541 18:36:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:06.541 18:36:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.541 18:36:13 -- nvmf/common.sh@161 -- # true 00:20:06.541 18:36:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.541 18:36:13 -- nvmf/common.sh@162 -- # true 00:20:06.541 18:36:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.541 18:36:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.541 18:36:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.541 18:36:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.541 18:36:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.541 18:36:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.541 18:36:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.541 18:36:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.541 18:36:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.541 18:36:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:06.541 18:36:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:06.800 18:36:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:06.800 18:36:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:06.800 18:36:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.800 18:36:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.800 18:36:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.800 18:36:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:06.800 18:36:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:06.800 18:36:14 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.800 18:36:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.800 18:36:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.800 18:36:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.800 18:36:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.800 18:36:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:06.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:06.800 00:20:06.801 --- 10.0.0.2 ping statistics --- 00:20:06.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.801 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:06.801 18:36:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:06.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:06.801 00:20:06.801 --- 10.0.0.3 ping statistics --- 00:20:06.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.801 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:06.801 18:36:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:06.801 00:20:06.801 --- 10.0.0.1 ping statistics --- 00:20:06.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.801 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:06.801 18:36:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.801 18:36:14 -- nvmf/common.sh@421 -- # return 0 00:20:06.801 18:36:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.801 18:36:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.801 18:36:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.801 18:36:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.801 18:36:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.801 18:36:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.801 18:36:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.801 18:36:14 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:06.801 18:36:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.801 18:36:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:06.801 18:36:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.801 18:36:14 -- nvmf/common.sh@469 -- # nvmfpid=92775 00:20:06.801 18:36:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:06.801 18:36:14 -- nvmf/common.sh@470 -- # waitforlisten 92775 00:20:06.801 18:36:14 -- common/autotest_common.sh@819 -- # '[' -z 92775 ']' 00:20:06.801 18:36:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.801 18:36:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.801 18:36:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
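A condensed sketch of the veth/namespace topology that the nvmf_veth_init trace above builds, reconstructed only from the commands visible in this log (interface names and the 10.0.0.x addresses are the ones traced here for this run, not a general SPDK recipe); run as root:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
With this in place, the initiator side (10.0.0.1) reaches the target namespace addresses 10.0.0.2/10.0.0.3 over the nvmf_br bridge, which is exactly what the three ping checks above confirm before nvmf_tgt is started.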
00:20:06.801 18:36:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.801 18:36:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.801 [2024-07-14 18:36:14.151242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:06.801 [2024-07-14 18:36:14.151321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.059 [2024-07-14 18:36:14.289686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.059 [2024-07-14 18:36:14.366464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:07.059 [2024-07-14 18:36:14.366650] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.059 [2024-07-14 18:36:14.366664] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.059 [2024-07-14 18:36:14.366674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.059 [2024-07-14 18:36:14.366707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.995 18:36:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.995 18:36:15 -- common/autotest_common.sh@852 -- # return 0 00:20:07.995 18:36:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:07.995 18:36:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 18:36:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.995 18:36:15 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 [2024-07-14 18:36:15.165057] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.995 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.995 18:36:15 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 null0 00:20:07.995 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.995 18:36:15 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.995 18:36:15 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.995 18:36:15 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5eead46e8ef9425888ee0b7cfd2d37d1 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.995 18:36:15 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 [2024-07-14 18:36:15.205140] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.995 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.995 18:36:15 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:07.995 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.995 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.254 nvme0n1 00:20:08.254 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.254 18:36:15 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:08.254 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.254 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.254 [ 00:20:08.254 { 00:20:08.254 "aliases": [ 00:20:08.254 "5eead46e-8ef9-4258-88ee-0b7cfd2d37d1" 00:20:08.254 ], 00:20:08.254 "assigned_rate_limits": { 00:20:08.254 "r_mbytes_per_sec": 0, 00:20:08.254 "rw_ios_per_sec": 0, 00:20:08.254 "rw_mbytes_per_sec": 0, 00:20:08.254 "w_mbytes_per_sec": 0 00:20:08.254 }, 00:20:08.254 "block_size": 512, 00:20:08.254 "claimed": false, 00:20:08.254 "driver_specific": { 00:20:08.254 "mp_policy": "active_passive", 00:20:08.254 "nvme": [ 00:20:08.254 { 00:20:08.254 "ctrlr_data": { 00:20:08.254 "ana_reporting": false, 00:20:08.254 "cntlid": 1, 00:20:08.254 "firmware_revision": "24.01.1", 00:20:08.254 "model_number": "SPDK bdev Controller", 00:20:08.254 "multi_ctrlr": true, 00:20:08.254 "oacs": { 00:20:08.254 "firmware": 0, 00:20:08.254 "format": 0, 00:20:08.254 "ns_manage": 0, 00:20:08.254 "security": 0 00:20:08.254 }, 00:20:08.254 "serial_number": "00000000000000000000", 00:20:08.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.254 "vendor_id": "0x8086" 00:20:08.254 }, 00:20:08.254 "ns_data": { 00:20:08.254 "can_share": true, 00:20:08.254 "id": 1 00:20:08.254 }, 00:20:08.254 "trid": { 00:20:08.254 "adrfam": "IPv4", 00:20:08.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.254 "traddr": "10.0.0.2", 00:20:08.254 "trsvcid": "4420", 00:20:08.254 "trtype": "TCP" 00:20:08.254 }, 00:20:08.254 "vs": { 00:20:08.254 "nvme_version": "1.3" 00:20:08.254 } 00:20:08.254 } 00:20:08.254 ] 00:20:08.254 }, 00:20:08.254 "name": "nvme0n1", 00:20:08.254 "num_blocks": 2097152, 00:20:08.254 "product_name": "NVMe disk", 00:20:08.254 "supported_io_types": { 00:20:08.254 "abort": true, 00:20:08.254 "compare": true, 00:20:08.254 "compare_and_write": true, 00:20:08.254 "flush": true, 00:20:08.254 "nvme_admin": true, 00:20:08.254 "nvme_io": true, 00:20:08.254 "read": true, 00:20:08.254 "reset": true, 00:20:08.254 "unmap": false, 00:20:08.254 "write": true, 00:20:08.254 "write_zeroes": true 00:20:08.254 }, 00:20:08.254 "uuid": "5eead46e-8ef9-4258-88ee-0b7cfd2d37d1", 00:20:08.254 "zoned": false 00:20:08.254 } 00:20:08.254 ] 00:20:08.254 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.254 18:36:15 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:08.254 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.254 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.254 [2024-07-14 18:36:15.469346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.254 [2024-07-14 18:36:15.469438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66ff0 (9): Bad file descriptor 00:20:08.254 [2024-07-14 18:36:15.601708] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:08.254 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.254 18:36:15 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:08.254 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.254 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.254 [ 00:20:08.254 { 00:20:08.254 "aliases": [ 00:20:08.254 "5eead46e-8ef9-4258-88ee-0b7cfd2d37d1" 00:20:08.254 ], 00:20:08.254 "assigned_rate_limits": { 00:20:08.254 "r_mbytes_per_sec": 0, 00:20:08.254 "rw_ios_per_sec": 0, 00:20:08.254 "rw_mbytes_per_sec": 0, 00:20:08.254 "w_mbytes_per_sec": 0 00:20:08.254 }, 00:20:08.254 "block_size": 512, 00:20:08.254 "claimed": false, 00:20:08.254 "driver_specific": { 00:20:08.254 "mp_policy": "active_passive", 00:20:08.254 "nvme": [ 00:20:08.254 { 00:20:08.254 "ctrlr_data": { 00:20:08.254 "ana_reporting": false, 00:20:08.254 "cntlid": 2, 00:20:08.254 "firmware_revision": "24.01.1", 00:20:08.254 "model_number": "SPDK bdev Controller", 00:20:08.254 "multi_ctrlr": true, 00:20:08.254 "oacs": { 00:20:08.254 "firmware": 0, 00:20:08.254 "format": 0, 00:20:08.254 "ns_manage": 0, 00:20:08.254 "security": 0 00:20:08.254 }, 00:20:08.255 "serial_number": "00000000000000000000", 00:20:08.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.255 "vendor_id": "0x8086" 00:20:08.255 }, 00:20:08.255 "ns_data": { 00:20:08.255 "can_share": true, 00:20:08.255 "id": 1 00:20:08.255 }, 00:20:08.255 "trid": { 00:20:08.255 "adrfam": "IPv4", 00:20:08.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.255 "traddr": "10.0.0.2", 00:20:08.255 "trsvcid": "4420", 00:20:08.255 "trtype": "TCP" 00:20:08.255 }, 00:20:08.255 "vs": { 00:20:08.255 "nvme_version": "1.3" 00:20:08.255 } 00:20:08.255 } 00:20:08.255 ] 00:20:08.255 }, 00:20:08.255 "name": "nvme0n1", 00:20:08.255 "num_blocks": 2097152, 00:20:08.255 "product_name": "NVMe disk", 00:20:08.255 "supported_io_types": { 00:20:08.255 "abort": true, 00:20:08.255 "compare": true, 00:20:08.255 "compare_and_write": true, 00:20:08.255 "flush": true, 00:20:08.255 "nvme_admin": true, 00:20:08.255 "nvme_io": true, 00:20:08.255 "read": true, 00:20:08.255 "reset": true, 00:20:08.255 "unmap": false, 00:20:08.255 "write": true, 00:20:08.255 "write_zeroes": true 00:20:08.255 }, 00:20:08.255 "uuid": "5eead46e-8ef9-4258-88ee-0b7cfd2d37d1", 00:20:08.255 "zoned": false 00:20:08.255 } 00:20:08.255 ] 00:20:08.255 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.255 18:36:15 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.255 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.255 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.255 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.255 18:36:15 -- host/async_init.sh@53 -- # mktemp 00:20:08.255 18:36:15 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UpeOg62u3t 00:20:08.255 18:36:15 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:08.255 18:36:15 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UpeOg62u3t 00:20:08.255 18:36:15 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:20:08.255 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.255 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.255 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.255 18:36:15 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:08.255 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.255 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.255 [2024-07-14 18:36:15.673596] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.255 [2024-07-14 18:36:15.673755] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:08.255 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.514 18:36:15 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UpeOg62u3t 00:20:08.514 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.514 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.514 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.514 18:36:15 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UpeOg62u3t 00:20:08.514 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.514 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.514 [2024-07-14 18:36:15.689588] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.514 nvme0n1 00:20:08.514 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.514 18:36:15 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:08.514 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.514 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.514 [ 00:20:08.514 { 00:20:08.514 "aliases": [ 00:20:08.514 "5eead46e-8ef9-4258-88ee-0b7cfd2d37d1" 00:20:08.514 ], 00:20:08.514 "assigned_rate_limits": { 00:20:08.514 "r_mbytes_per_sec": 0, 00:20:08.514 "rw_ios_per_sec": 0, 00:20:08.514 "rw_mbytes_per_sec": 0, 00:20:08.514 "w_mbytes_per_sec": 0 00:20:08.514 }, 00:20:08.514 "block_size": 512, 00:20:08.514 "claimed": false, 00:20:08.514 "driver_specific": { 00:20:08.514 "mp_policy": "active_passive", 00:20:08.514 "nvme": [ 00:20:08.514 { 00:20:08.514 "ctrlr_data": { 00:20:08.514 "ana_reporting": false, 00:20:08.514 "cntlid": 3, 00:20:08.514 "firmware_revision": "24.01.1", 00:20:08.514 "model_number": "SPDK bdev Controller", 00:20:08.514 "multi_ctrlr": true, 00:20:08.514 "oacs": { 00:20:08.514 "firmware": 0, 00:20:08.514 "format": 0, 00:20:08.514 "ns_manage": 0, 00:20:08.514 "security": 0 00:20:08.514 }, 00:20:08.514 "serial_number": "00000000000000000000", 00:20:08.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.514 "vendor_id": "0x8086" 00:20:08.514 }, 00:20:08.514 "ns_data": { 00:20:08.514 "can_share": true, 00:20:08.514 "id": 1 00:20:08.514 }, 00:20:08.514 "trid": { 00:20:08.514 "adrfam": "IPv4", 00:20:08.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.514 "traddr": "10.0.0.2", 00:20:08.514 "trsvcid": "4421", 00:20:08.514 "trtype": "TCP" 00:20:08.514 }, 00:20:08.514 "vs": { 00:20:08.514 "nvme_version": "1.3" 00:20:08.514 } 00:20:08.514 } 00:20:08.514 ] 00:20:08.514 }, 00:20:08.514 
"name": "nvme0n1", 00:20:08.514 "num_blocks": 2097152, 00:20:08.514 "product_name": "NVMe disk", 00:20:08.514 "supported_io_types": { 00:20:08.514 "abort": true, 00:20:08.514 "compare": true, 00:20:08.514 "compare_and_write": true, 00:20:08.514 "flush": true, 00:20:08.514 "nvme_admin": true, 00:20:08.514 "nvme_io": true, 00:20:08.514 "read": true, 00:20:08.514 "reset": true, 00:20:08.514 "unmap": false, 00:20:08.514 "write": true, 00:20:08.514 "write_zeroes": true 00:20:08.514 }, 00:20:08.514 "uuid": "5eead46e-8ef9-4258-88ee-0b7cfd2d37d1", 00:20:08.514 "zoned": false 00:20:08.514 } 00:20:08.514 ] 00:20:08.514 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.514 18:36:15 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.514 18:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.514 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:08.514 18:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.514 18:36:15 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UpeOg62u3t 00:20:08.514 18:36:15 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:08.514 18:36:15 -- host/async_init.sh@78 -- # nvmftestfini 00:20:08.514 18:36:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.514 18:36:15 -- nvmf/common.sh@116 -- # sync 00:20:08.514 18:36:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:08.514 18:36:15 -- nvmf/common.sh@119 -- # set +e 00:20:08.514 18:36:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.514 18:36:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:08.514 rmmod nvme_tcp 00:20:08.514 rmmod nvme_fabrics 00:20:08.514 rmmod nvme_keyring 00:20:08.514 18:36:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.514 18:36:15 -- nvmf/common.sh@123 -- # set -e 00:20:08.514 18:36:15 -- nvmf/common.sh@124 -- # return 0 00:20:08.514 18:36:15 -- nvmf/common.sh@477 -- # '[' -n 92775 ']' 00:20:08.514 18:36:15 -- nvmf/common.sh@478 -- # killprocess 92775 00:20:08.514 18:36:15 -- common/autotest_common.sh@926 -- # '[' -z 92775 ']' 00:20:08.514 18:36:15 -- common/autotest_common.sh@930 -- # kill -0 92775 00:20:08.773 18:36:15 -- common/autotest_common.sh@931 -- # uname 00:20:08.773 18:36:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:08.773 18:36:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92775 00:20:08.773 18:36:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:08.773 18:36:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:08.773 killing process with pid 92775 00:20:08.773 18:36:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92775' 00:20:08.773 18:36:15 -- common/autotest_common.sh@945 -- # kill 92775 00:20:08.773 18:36:15 -- common/autotest_common.sh@950 -- # wait 92775 00:20:08.773 18:36:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.773 18:36:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.773 18:36:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.773 18:36:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.773 18:36:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.773 18:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.773 18:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.773 18:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.773 18:36:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:08.773 
00:20:08.773 real 0m2.559s 00:20:08.773 user 0m2.360s 00:20:08.773 sys 0m0.636s 00:20:08.773 18:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.773 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:08.773 ************************************ 00:20:08.773 END TEST nvmf_async_init 00:20:08.773 ************************************ 00:20:09.032 18:36:16 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:09.032 18:36:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:09.032 18:36:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:09.032 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:09.032 ************************************ 00:20:09.032 START TEST dma 00:20:09.032 ************************************ 00:20:09.032 18:36:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:09.032 * Looking for test storage... 00:20:09.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:09.032 18:36:16 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:09.032 18:36:16 -- nvmf/common.sh@7 -- # uname -s 00:20:09.032 18:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.032 18:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.032 18:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.032 18:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.032 18:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.032 18:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.032 18:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.032 18:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.032 18:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.032 18:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.032 18:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:20:09.032 18:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:20:09.032 18:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.032 18:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.032 18:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:09.032 18:36:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.032 18:36:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.032 18:36:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.032 18:36:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.032 18:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.032 18:36:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.032 18:36:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.032 18:36:16 -- paths/export.sh@5 -- # export PATH 00:20:09.033 18:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.033 18:36:16 -- nvmf/common.sh@46 -- # : 0 00:20:09.033 18:36:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:09.033 18:36:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:09.033 18:36:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:09.033 18:36:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.033 18:36:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.033 18:36:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:09.033 18:36:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:09.033 18:36:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:09.033 18:36:16 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:09.033 18:36:16 -- host/dma.sh@13 -- # exit 0 00:20:09.033 00:20:09.033 real 0m0.099s 00:20:09.033 user 0m0.047s 00:20:09.033 sys 0m0.058s 00:20:09.033 18:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.033 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:09.033 ************************************ 00:20:09.033 END TEST dma 00:20:09.033 ************************************ 00:20:09.033 18:36:16 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:09.033 18:36:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:09.033 18:36:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:09.033 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:09.033 ************************************ 00:20:09.033 START TEST nvmf_identify 00:20:09.033 ************************************ 00:20:09.033 18:36:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:09.292 * Looking for test storage... 
00:20:09.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:09.292 18:36:16 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:09.292 18:36:16 -- nvmf/common.sh@7 -- # uname -s 00:20:09.292 18:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.292 18:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.292 18:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.292 18:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.292 18:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.292 18:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.292 18:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.292 18:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.292 18:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.292 18:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.292 18:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:20:09.292 18:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:20:09.292 18:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.292 18:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.292 18:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:09.292 18:36:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.292 18:36:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.292 18:36:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.292 18:36:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.292 18:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.292 18:36:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.292 18:36:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.292 18:36:16 -- paths/export.sh@5 
-- # export PATH 00:20:09.292 18:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.292 18:36:16 -- nvmf/common.sh@46 -- # : 0 00:20:09.292 18:36:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:09.292 18:36:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:09.292 18:36:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:09.292 18:36:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.292 18:36:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.292 18:36:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:09.292 18:36:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:09.292 18:36:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:09.292 18:36:16 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.292 18:36:16 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.292 18:36:16 -- host/identify.sh@14 -- # nvmftestinit 00:20:09.292 18:36:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:09.292 18:36:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.292 18:36:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:09.292 18:36:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:09.292 18:36:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:09.292 18:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.292 18:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.292 18:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.292 18:36:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:09.292 18:36:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:09.292 18:36:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:09.292 18:36:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:09.292 18:36:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:09.292 18:36:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:09.292 18:36:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.292 18:36:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.292 18:36:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:09.292 18:36:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:09.292 18:36:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:09.292 18:36:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:09.292 18:36:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:09.292 18:36:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.292 18:36:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:09.292 18:36:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:09.292 18:36:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:09.292 18:36:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:09.292 18:36:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:09.292 18:36:16 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:09.292 Cannot find device "nvmf_tgt_br" 00:20:09.292 18:36:16 -- nvmf/common.sh@154 -- # true 00:20:09.292 18:36:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:09.292 Cannot find device "nvmf_tgt_br2" 00:20:09.292 18:36:16 -- nvmf/common.sh@155 -- # true 00:20:09.292 18:36:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:09.292 18:36:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:09.292 Cannot find device "nvmf_tgt_br" 00:20:09.292 18:36:16 -- nvmf/common.sh@157 -- # true 00:20:09.292 18:36:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:09.292 Cannot find device "nvmf_tgt_br2" 00:20:09.292 18:36:16 -- nvmf/common.sh@158 -- # true 00:20:09.292 18:36:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:09.292 18:36:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:09.292 18:36:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:09.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.292 18:36:16 -- nvmf/common.sh@161 -- # true 00:20:09.292 18:36:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:09.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.292 18:36:16 -- nvmf/common.sh@162 -- # true 00:20:09.292 18:36:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:09.292 18:36:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:09.292 18:36:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:09.292 18:36:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:09.292 18:36:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:09.292 18:36:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:09.292 18:36:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:09.552 18:36:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:09.552 18:36:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:09.552 18:36:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:09.552 18:36:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:09.552 18:36:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:09.552 18:36:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:09.552 18:36:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:09.552 18:36:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:09.552 18:36:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:09.552 18:36:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:09.552 18:36:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:09.552 18:36:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:09.552 18:36:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:09.552 18:36:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:09.552 18:36:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:09.552 18:36:16 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:09.552 18:36:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:09.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:20:09.552 00:20:09.552 --- 10.0.0.2 ping statistics --- 00:20:09.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.552 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:09.552 18:36:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:09.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:09.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:20:09.552 00:20:09.552 --- 10.0.0.3 ping statistics --- 00:20:09.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.552 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:09.552 18:36:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:09.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:09.552 00:20:09.552 --- 10.0.0.1 ping statistics --- 00:20:09.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.552 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:09.552 18:36:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.552 18:36:16 -- nvmf/common.sh@421 -- # return 0 00:20:09.552 18:36:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:09.552 18:36:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.552 18:36:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:09.552 18:36:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:09.552 18:36:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.552 18:36:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:09.552 18:36:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:09.552 18:36:16 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:09.552 18:36:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:09.552 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:09.552 18:36:16 -- host/identify.sh@19 -- # nvmfpid=93042 00:20:09.552 18:36:16 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:09.552 18:36:16 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.552 18:36:16 -- host/identify.sh@23 -- # waitforlisten 93042 00:20:09.552 18:36:16 -- common/autotest_common.sh@819 -- # '[' -z 93042 ']' 00:20:09.552 18:36:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.552 18:36:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.552 18:36:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.552 18:36:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.552 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:09.552 [2024-07-14 18:36:16.953966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:09.552 [2024-07-14 18:36:16.954060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.811 [2024-07-14 18:36:17.098619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.811 [2024-07-14 18:36:17.174452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.811 [2024-07-14 18:36:17.174849] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.811 [2024-07-14 18:36:17.174975] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.812 [2024-07-14 18:36:17.175140] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.812 [2024-07-14 18:36:17.175400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.812 [2024-07-14 18:36:17.175451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.812 [2024-07-14 18:36:17.175548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.812 [2024-07-14 18:36:17.175549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.749 18:36:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:10.749 18:36:17 -- common/autotest_common.sh@852 -- # return 0 00:20:10.749 18:36:17 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.749 18:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:17 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 [2024-07-14 18:36:17.981279] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.749 18:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.749 18:36:18 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:10.749 18:36:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 18:36:18 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.749 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 Malloc0 00:20:10.749 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.749 18:36:18 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.749 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.749 18:36:18 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:10.749 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.749 18:36:18 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.749 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 [2024-07-14 18:36:18.088852] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.749 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.749 18:36:18 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:10.749 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.749 18:36:18 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:10.749 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:10.749 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.749 [2024-07-14 18:36:18.104635] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:10.749 [ 00:20:10.749 { 00:20:10.749 "allow_any_host": true, 00:20:10.749 "hosts": [], 00:20:10.749 "listen_addresses": [ 00:20:10.749 { 00:20:10.749 "adrfam": "IPv4", 00:20:10.749 "traddr": "10.0.0.2", 00:20:10.749 "transport": "TCP", 00:20:10.749 "trsvcid": "4420", 00:20:10.749 "trtype": "TCP" 00:20:10.749 } 00:20:10.749 ], 00:20:10.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:10.749 "subtype": "Discovery" 00:20:10.749 }, 00:20:10.749 { 00:20:10.749 "allow_any_host": true, 00:20:10.749 "hosts": [], 00:20:10.749 "listen_addresses": [ 00:20:10.749 { 00:20:10.749 "adrfam": "IPv4", 00:20:10.749 "traddr": "10.0.0.2", 00:20:10.749 "transport": "TCP", 00:20:10.749 "trsvcid": "4420", 00:20:10.749 "trtype": "TCP" 00:20:10.749 } 00:20:10.749 ], 00:20:10.749 "max_cntlid": 65519, 00:20:10.749 "max_namespaces": 32, 00:20:10.749 "min_cntlid": 1, 00:20:10.749 "model_number": "SPDK bdev Controller", 00:20:10.749 "namespaces": [ 00:20:10.749 { 00:20:10.749 "bdev_name": "Malloc0", 00:20:10.749 "eui64": "ABCDEF0123456789", 00:20:10.749 "name": "Malloc0", 00:20:10.749 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:10.749 "nsid": 1, 00:20:10.749 "uuid": "2546b1c7-8e54-44ae-b0ba-f14348450fcd" 00:20:10.749 } 00:20:10.749 ], 00:20:10.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.749 "serial_number": "SPDK00000000000001", 00:20:10.749 "subtype": "NVMe" 00:20:10.749 } 00:20:10.749 ] 00:20:10.749 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:10.750 18:36:18 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:10.750 [2024-07-14 18:36:18.147029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:10.750 [2024-07-14 18:36:18.147073] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93096 ] 00:20:11.013 [2024-07-14 18:36:18.291731] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:11.013 [2024-07-14 18:36:18.291795] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:11.013 [2024-07-14 18:36:18.291803] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:11.013 [2024-07-14 18:36:18.291815] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:11.013 [2024-07-14 18:36:18.291824] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:11.013 [2024-07-14 18:36:18.291967] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:11.013 [2024-07-14 18:36:18.292022] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd9ed70 0 00:20:11.013 [2024-07-14 18:36:18.297638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:11.013 [2024-07-14 18:36:18.297665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:11.013 [2024-07-14 18:36:18.297672] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:11.013 [2024-07-14 18:36:18.297676] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:11.013 [2024-07-14 18:36:18.297726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.297733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.297738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.297752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:11.013 [2024-07-14 18:36:18.297788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.305510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.305532] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.013 [2024-07-14 18:36:18.305538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305544] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.305555] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:11.013 [2024-07-14 18:36:18.305564] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:11.013 [2024-07-14 18:36:18.305570] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:11.013 [2024-07-14 18:36:18.305587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305609] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.305618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.013 [2024-07-14 18:36:18.305662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.305747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.305755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.013 [2024-07-14 18:36:18.305759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.305770] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:11.013 [2024-07-14 18:36:18.305778] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:11.013 [2024-07-14 18:36:18.305786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305791] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305794] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.305802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.013 [2024-07-14 18:36:18.305824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.305888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.305895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.013 [2024-07-14 18:36:18.305899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.305910] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:11.013 [2024-07-14 18:36:18.305918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:11.013 [2024-07-14 18:36:18.305926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.305934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.305942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.013 [2024-07-14 18:36:18.305961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.306016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.306023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:11.013 [2024-07-14 18:36:18.306027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.306038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:11.013 [2024-07-14 18:36:18.306048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306057] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.306068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.013 [2024-07-14 18:36:18.306094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.306164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.306171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.013 [2024-07-14 18:36:18.306175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306179] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.306185] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:11.013 [2024-07-14 18:36:18.306191] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:11.013 [2024-07-14 18:36:18.306199] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:11.013 [2024-07-14 18:36:18.306304] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:11.013 [2024-07-14 18:36:18.306310] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:11.013 [2024-07-14 18:36:18.306319] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306323] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306327] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.306335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.013 [2024-07-14 18:36:18.306361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.306424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.306431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.013 [2024-07-14 18:36:18.306435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306439] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.306445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:11.013 [2024-07-14 18:36:18.306455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306460] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.013 [2024-07-14 18:36:18.306471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.013 [2024-07-14 18:36:18.306516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.013 [2024-07-14 18:36:18.306589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.013 [2024-07-14 18:36:18.306607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.013 [2024-07-14 18:36:18.306611] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.013 [2024-07-14 18:36:18.306615] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.013 [2024-07-14 18:36:18.306620] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:11.013 [2024-07-14 18:36:18.306626] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:11.014 [2024-07-14 18:36:18.306634] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:11.014 [2024-07-14 18:36:18.306651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:11.014 [2024-07-14 18:36:18.306661] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306669] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.306678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.014 [2024-07-14 18:36:18.306699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.014 [2024-07-14 18:36:18.306807] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.014 [2024-07-14 18:36:18.306814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.014 [2024-07-14 18:36:18.306818] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306823] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9ed70): datao=0, datal=4096, cccid=0 00:20:11.014 [2024-07-14 18:36:18.306828] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde85f0) on tqpair(0xd9ed70): expected_datao=0, payload_size=4096 00:20:11.014 [2024-07-14 18:36:18.306837] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306842] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.014 [2024-07-14 18:36:18.306867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.014 [2024-07-14 18:36:18.306870] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306874] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.014 [2024-07-14 18:36:18.306884] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:11.014 [2024-07-14 18:36:18.306889] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:11.014 [2024-07-14 18:36:18.306894] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:11.014 [2024-07-14 18:36:18.306900] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:11.014 [2024-07-14 18:36:18.306905] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:11.014 [2024-07-14 18:36:18.306911] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:11.014 [2024-07-14 18:36:18.306924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:11.014 [2024-07-14 18:36:18.306933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306937] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.306941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.306949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.014 [2024-07-14 18:36:18.306971] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.014 [2024-07-14 18:36:18.307038] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.014 [2024-07-14 18:36:18.307045] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.014 [2024-07-14 18:36:18.307049] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde85f0) on tqpair=0xd9ed70 00:20:11.014 [2024-07-14 18:36:18.307062] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307066] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307070] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.014 [2024-07-14 18:36:18.307084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.014 [2024-07-14 18:36:18.307104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.014 [2024-07-14 18:36:18.307131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.014 [2024-07-14 18:36:18.307150] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:11.014 [2024-07-14 18:36:18.307163] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:11.014 [2024-07-14 18:36:18.307171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.014 [2024-07-14 18:36:18.307208] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde85f0, cid 0, qid 0 00:20:11.014 [2024-07-14 18:36:18.307215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8750, cid 1, qid 0 00:20:11.014 [2024-07-14 18:36:18.307228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde88b0, cid 2, qid 0 00:20:11.014 [2024-07-14 18:36:18.307233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.014 [2024-07-14 18:36:18.307237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8b70, cid 4, qid 0 00:20:11.014 [2024-07-14 18:36:18.307346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.014 [2024-07-14 18:36:18.307353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.014 [2024-07-14 18:36:18.307357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8b70) on tqpair=0xd9ed70 00:20:11.014 
[2024-07-14 18:36:18.307367] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:11.014 [2024-07-14 18:36:18.307373] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:11.014 [2024-07-14 18:36:18.307384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307393] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.014 [2024-07-14 18:36:18.307420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8b70, cid 4, qid 0 00:20:11.014 [2024-07-14 18:36:18.307524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.014 [2024-07-14 18:36:18.307533] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.014 [2024-07-14 18:36:18.307537] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307541] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9ed70): datao=0, datal=4096, cccid=4 00:20:11.014 [2024-07-14 18:36:18.307546] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde8b70) on tqpair(0xd9ed70): expected_datao=0, payload_size=4096 00:20:11.014 [2024-07-14 18:36:18.307555] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307559] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.014 [2024-07-14 18:36:18.307574] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.014 [2024-07-14 18:36:18.307578] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307583] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8b70) on tqpair=0xd9ed70 00:20:11.014 [2024-07-14 18:36:18.307616] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:11.014 [2024-07-14 18:36:18.307644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.014 [2024-07-14 18:36:18.307670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307674] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd9ed70) 00:20:11.014 [2024-07-14 18:36:18.307685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:11.014 [2024-07-14 18:36:18.307712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8b70, cid 4, qid 0 00:20:11.014 [2024-07-14 18:36:18.307720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8cd0, cid 5, qid 0 00:20:11.014 [2024-07-14 18:36:18.307839] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.014 [2024-07-14 18:36:18.307847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.014 [2024-07-14 18:36:18.307851] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307854] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9ed70): datao=0, datal=1024, cccid=4 00:20:11.014 [2024-07-14 18:36:18.307859] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde8b70) on tqpair(0xd9ed70): expected_datao=0, payload_size=1024 00:20:11.014 [2024-07-14 18:36:18.307867] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307871] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.014 [2024-07-14 18:36:18.307883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.014 [2024-07-14 18:36:18.307887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.014 [2024-07-14 18:36:18.307891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8cd0) on tqpair=0xd9ed70 00:20:11.014 [2024-07-14 18:36:18.351564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.014 [2024-07-14 18:36:18.351634] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.014 [2024-07-14 18:36:18.351641] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8b70) on tqpair=0xd9ed70 00:20:11.015 [2024-07-14 18:36:18.351661] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351666] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9ed70) 00:20:11.015 [2024-07-14 18:36:18.351680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.015 [2024-07-14 18:36:18.351721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8b70, cid 4, qid 0 00:20:11.015 [2024-07-14 18:36:18.351818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.015 [2024-07-14 18:36:18.351825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.015 [2024-07-14 18:36:18.351829] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351833] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9ed70): datao=0, datal=3072, cccid=4 00:20:11.015 [2024-07-14 18:36:18.351839] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde8b70) on tqpair(0xd9ed70): expected_datao=0, payload_size=3072 00:20:11.015 [2024-07-14 18:36:18.351847] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 
18:36:18.351851] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.015 [2024-07-14 18:36:18.351867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.015 [2024-07-14 18:36:18.351870] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8b70) on tqpair=0xd9ed70 00:20:11.015 [2024-07-14 18:36:18.351885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.351893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9ed70) 00:20:11.015 [2024-07-14 18:36:18.351901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.015 [2024-07-14 18:36:18.351941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8b70, cid 4, qid 0 00:20:11.015 [2024-07-14 18:36:18.352025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.015 [2024-07-14 18:36:18.352033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.015 [2024-07-14 18:36:18.352037] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.352041] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9ed70): datao=0, datal=8, cccid=4 00:20:11.015 [2024-07-14 18:36:18.352046] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde8b70) on tqpair(0xd9ed70): expected_datao=0, payload_size=8 00:20:11.015 [2024-07-14 18:36:18.352053] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.352058] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.395614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.015 [2024-07-14 18:36:18.395639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.015 [2024-07-14 18:36:18.395646] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.015 [2024-07-14 18:36:18.395650] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8b70) on tqpair=0xd9ed70 00:20:11.015 ===================================================== 00:20:11.015 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:11.015 ===================================================== 00:20:11.015 Controller Capabilities/Features 00:20:11.015 ================================ 00:20:11.015 Vendor ID: 0000 00:20:11.015 Subsystem Vendor ID: 0000 00:20:11.015 Serial Number: .................... 00:20:11.015 Model Number: ........................................ 
00:20:11.015 Firmware Version: 24.01.1 00:20:11.015 Recommended Arb Burst: 0 00:20:11.015 IEEE OUI Identifier: 00 00 00 00:20:11.015 Multi-path I/O 00:20:11.015 May have multiple subsystem ports: No 00:20:11.015 May have multiple controllers: No 00:20:11.015 Associated with SR-IOV VF: No 00:20:11.015 Max Data Transfer Size: 131072 00:20:11.015 Max Number of Namespaces: 0 00:20:11.015 Max Number of I/O Queues: 1024 00:20:11.015 NVMe Specification Version (VS): 1.3 00:20:11.015 NVMe Specification Version (Identify): 1.3 00:20:11.015 Maximum Queue Entries: 128 00:20:11.015 Contiguous Queues Required: Yes 00:20:11.015 Arbitration Mechanisms Supported 00:20:11.015 Weighted Round Robin: Not Supported 00:20:11.015 Vendor Specific: Not Supported 00:20:11.015 Reset Timeout: 15000 ms 00:20:11.015 Doorbell Stride: 4 bytes 00:20:11.015 NVM Subsystem Reset: Not Supported 00:20:11.015 Command Sets Supported 00:20:11.015 NVM Command Set: Supported 00:20:11.015 Boot Partition: Not Supported 00:20:11.015 Memory Page Size Minimum: 4096 bytes 00:20:11.015 Memory Page Size Maximum: 4096 bytes 00:20:11.015 Persistent Memory Region: Not Supported 00:20:11.015 Optional Asynchronous Events Supported 00:20:11.015 Namespace Attribute Notices: Not Supported 00:20:11.015 Firmware Activation Notices: Not Supported 00:20:11.015 ANA Change Notices: Not Supported 00:20:11.015 PLE Aggregate Log Change Notices: Not Supported 00:20:11.015 LBA Status Info Alert Notices: Not Supported 00:20:11.015 EGE Aggregate Log Change Notices: Not Supported 00:20:11.015 Normal NVM Subsystem Shutdown event: Not Supported 00:20:11.015 Zone Descriptor Change Notices: Not Supported 00:20:11.015 Discovery Log Change Notices: Supported 00:20:11.015 Controller Attributes 00:20:11.015 128-bit Host Identifier: Not Supported 00:20:11.015 Non-Operational Permissive Mode: Not Supported 00:20:11.015 NVM Sets: Not Supported 00:20:11.015 Read Recovery Levels: Not Supported 00:20:11.015 Endurance Groups: Not Supported 00:20:11.015 Predictable Latency Mode: Not Supported 00:20:11.015 Traffic Based Keep ALive: Not Supported 00:20:11.015 Namespace Granularity: Not Supported 00:20:11.015 SQ Associations: Not Supported 00:20:11.015 UUID List: Not Supported 00:20:11.015 Multi-Domain Subsystem: Not Supported 00:20:11.015 Fixed Capacity Management: Not Supported 00:20:11.015 Variable Capacity Management: Not Supported 00:20:11.015 Delete Endurance Group: Not Supported 00:20:11.015 Delete NVM Set: Not Supported 00:20:11.015 Extended LBA Formats Supported: Not Supported 00:20:11.015 Flexible Data Placement Supported: Not Supported 00:20:11.015 00:20:11.015 Controller Memory Buffer Support 00:20:11.015 ================================ 00:20:11.015 Supported: No 00:20:11.015 00:20:11.015 Persistent Memory Region Support 00:20:11.015 ================================ 00:20:11.015 Supported: No 00:20:11.015 00:20:11.015 Admin Command Set Attributes 00:20:11.015 ============================ 00:20:11.015 Security Send/Receive: Not Supported 00:20:11.015 Format NVM: Not Supported 00:20:11.015 Firmware Activate/Download: Not Supported 00:20:11.015 Namespace Management: Not Supported 00:20:11.015 Device Self-Test: Not Supported 00:20:11.015 Directives: Not Supported 00:20:11.015 NVMe-MI: Not Supported 00:20:11.015 Virtualization Management: Not Supported 00:20:11.015 Doorbell Buffer Config: Not Supported 00:20:11.015 Get LBA Status Capability: Not Supported 00:20:11.015 Command & Feature Lockdown Capability: Not Supported 00:20:11.015 Abort Command Limit: 1 00:20:11.015 
Async Event Request Limit: 4 00:20:11.015 Number of Firmware Slots: N/A 00:20:11.015 Firmware Slot 1 Read-Only: N/A 00:20:11.015 Firmware Activation Without Reset: N/A 00:20:11.015 Multiple Update Detection Support: N/A 00:20:11.015 Firmware Update Granularity: No Information Provided 00:20:11.015 Per-Namespace SMART Log: No 00:20:11.015 Asymmetric Namespace Access Log Page: Not Supported 00:20:11.015 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:11.015 Command Effects Log Page: Not Supported 00:20:11.015 Get Log Page Extended Data: Supported 00:20:11.015 Telemetry Log Pages: Not Supported 00:20:11.015 Persistent Event Log Pages: Not Supported 00:20:11.015 Supported Log Pages Log Page: May Support 00:20:11.015 Commands Supported & Effects Log Page: Not Supported 00:20:11.015 Feature Identifiers & Effects Log Page:May Support 00:20:11.015 NVMe-MI Commands & Effects Log Page: May Support 00:20:11.015 Data Area 4 for Telemetry Log: Not Supported 00:20:11.015 Error Log Page Entries Supported: 128 00:20:11.015 Keep Alive: Not Supported 00:20:11.015 00:20:11.015 NVM Command Set Attributes 00:20:11.015 ========================== 00:20:11.015 Submission Queue Entry Size 00:20:11.015 Max: 1 00:20:11.015 Min: 1 00:20:11.015 Completion Queue Entry Size 00:20:11.015 Max: 1 00:20:11.015 Min: 1 00:20:11.015 Number of Namespaces: 0 00:20:11.015 Compare Command: Not Supported 00:20:11.015 Write Uncorrectable Command: Not Supported 00:20:11.015 Dataset Management Command: Not Supported 00:20:11.015 Write Zeroes Command: Not Supported 00:20:11.015 Set Features Save Field: Not Supported 00:20:11.015 Reservations: Not Supported 00:20:11.015 Timestamp: Not Supported 00:20:11.015 Copy: Not Supported 00:20:11.015 Volatile Write Cache: Not Present 00:20:11.015 Atomic Write Unit (Normal): 1 00:20:11.015 Atomic Write Unit (PFail): 1 00:20:11.015 Atomic Compare & Write Unit: 1 00:20:11.015 Fused Compare & Write: Supported 00:20:11.015 Scatter-Gather List 00:20:11.015 SGL Command Set: Supported 00:20:11.015 SGL Keyed: Supported 00:20:11.015 SGL Bit Bucket Descriptor: Not Supported 00:20:11.015 SGL Metadata Pointer: Not Supported 00:20:11.015 Oversized SGL: Not Supported 00:20:11.015 SGL Metadata Address: Not Supported 00:20:11.015 SGL Offset: Supported 00:20:11.015 Transport SGL Data Block: Not Supported 00:20:11.015 Replay Protected Memory Block: Not Supported 00:20:11.015 00:20:11.015 Firmware Slot Information 00:20:11.015 ========================= 00:20:11.015 Active slot: 0 00:20:11.015 00:20:11.015 00:20:11.015 Error Log 00:20:11.015 ========= 00:20:11.015 00:20:11.015 Active Namespaces 00:20:11.015 ================= 00:20:11.015 Discovery Log Page 00:20:11.015 ================== 00:20:11.016 Generation Counter: 2 00:20:11.016 Number of Records: 2 00:20:11.016 Record Format: 0 00:20:11.016 00:20:11.016 Discovery Log Entry 0 00:20:11.016 ---------------------- 00:20:11.016 Transport Type: 3 (TCP) 00:20:11.016 Address Family: 1 (IPv4) 00:20:11.016 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:11.016 Entry Flags: 00:20:11.016 Duplicate Returned Information: 1 00:20:11.016 Explicit Persistent Connection Support for Discovery: 1 00:20:11.016 Transport Requirements: 00:20:11.016 Secure Channel: Not Required 00:20:11.016 Port ID: 0 (0x0000) 00:20:11.016 Controller ID: 65535 (0xffff) 00:20:11.016 Admin Max SQ Size: 128 00:20:11.016 Transport Service Identifier: 4420 00:20:11.016 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:11.016 Transport Address: 10.0.0.2 00:20:11.016 
Discovery Log Entry 1 00:20:11.016 ---------------------- 00:20:11.016 Transport Type: 3 (TCP) 00:20:11.016 Address Family: 1 (IPv4) 00:20:11.016 Subsystem Type: 2 (NVM Subsystem) 00:20:11.016 Entry Flags: 00:20:11.016 Duplicate Returned Information: 0 00:20:11.016 Explicit Persistent Connection Support for Discovery: 0 00:20:11.016 Transport Requirements: 00:20:11.016 Secure Channel: Not Required 00:20:11.016 Port ID: 0 (0x0000) 00:20:11.016 Controller ID: 65535 (0xffff) 00:20:11.016 Admin Max SQ Size: 128 00:20:11.016 Transport Service Identifier: 4420 00:20:11.016 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:11.016 Transport Address: 10.0.0.2 [2024-07-14 18:36:18.395790] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:11.016 [2024-07-14 18:36:18.395812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.016 [2024-07-14 18:36:18.395821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.016 [2024-07-14 18:36:18.395828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.016 [2024-07-14 18:36:18.395834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.016 [2024-07-14 18:36:18.395845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.395850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.395854] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.395863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.395893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.395994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396027] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396162] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396173] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396177] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396183] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:11.016 [2024-07-14 18:36:18.396188] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:11.016 [2024-07-14 18:36:18.396198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396203] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396303] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396314] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396318] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396429] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396436] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396444] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396460] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 
18:36:18.396586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.396871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.396885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.396894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.396902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.396920] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.396992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.396998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.016 [2024-07-14 18:36:18.397002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.016 
[2024-07-14 18:36:18.397006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.016 [2024-07-14 18:36:18.397016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.397021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.016 [2024-07-14 18:36:18.397024] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.016 [2024-07-14 18:36:18.397031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.016 [2024-07-14 18:36:18.397050] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.016 [2024-07-14 18:36:18.397133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.016 [2024-07-14 18:36:18.397140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397193] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.397262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.397269] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397273] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.397391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.397397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397401] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397416] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.397527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.397535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.397687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.397694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.397802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.397809] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397813] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397817] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397836] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397863] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.397918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.397930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.397935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.397950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397955] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.397959] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.397967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.397986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.398050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.398062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.398067] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398071] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.398082] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.398099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.398118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.398208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.398224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.398229] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398233] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.398245] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.017 [2024-07-14 18:36:18.398261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.017 [2024-07-14 18:36:18.398282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.017 [2024-07-14 18:36:18.398358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.017 [2024-07-14 18:36:18.398370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.017 [2024-07-14 18:36:18.398374] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.017 [2024-07-14 18:36:18.398379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.017 [2024-07-14 18:36:18.398389] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.398406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.398426] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.398505] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.398515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.398519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398523] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.398535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398540] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.398551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.398585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.398660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.398667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.398671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.398686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.398703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.398722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 
00:20:11.018 [2024-07-14 18:36:18.398793] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.398800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.398804] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398808] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.398819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.398835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.398853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.398928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.398935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.398939] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398943] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.398953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.398962] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.398969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.398988] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.399050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.399056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.399060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.399075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.399091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.399118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.399202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.399209] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:11.018 [2024-07-14 18:36:18.399213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.399228] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.399244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.399262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.399331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.399337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.399341] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.399356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399365] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.399372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.399391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.399458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.399465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.399469] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.399473] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.399484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.403500] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.403517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9ed70) 00:20:11.018 [2024-07-14 18:36:18.403527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.018 [2024-07-14 18:36:18.403565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde8a10, cid 3, qid 0 00:20:11.018 [2024-07-14 18:36:18.403661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.018 [2024-07-14 18:36:18.403670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.018 [2024-07-14 18:36:18.403674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.018 [2024-07-14 18:36:18.403679] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0xde8a10) on tqpair=0xd9ed70 00:20:11.018 [2024-07-14 18:36:18.403688] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:11.018 00:20:11.018 18:36:18 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:11.300 [2024-07-14 18:36:18.440136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:11.300 [2024-07-14 18:36:18.440179] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93104 ] 00:20:11.300 [2024-07-14 18:36:18.582420] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:11.300 [2024-07-14 18:36:18.582483] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:11.300 [2024-07-14 18:36:18.585538] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:11.300 [2024-07-14 18:36:18.585556] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:11.300 [2024-07-14 18:36:18.585568] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:11.300 [2024-07-14 18:36:18.585700] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:11.300 [2024-07-14 18:36:18.585754] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfc4d70 0 00:20:11.300 [2024-07-14 18:36:18.600571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:11.300 [2024-07-14 18:36:18.600595] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:11.300 [2024-07-14 18:36:18.600617] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:11.300 [2024-07-14 18:36:18.600621] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:11.300 [2024-07-14 18:36:18.600668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.600676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.600681] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.300 [2024-07-14 18:36:18.600695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:11.300 [2024-07-14 18:36:18.600726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.300 [2024-07-14 18:36:18.608632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.300 [2024-07-14 18:36:18.608655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.300 [2024-07-14 18:36:18.608677] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.608682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.300 [2024-07-14 18:36:18.608694] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:11.300 [2024-07-14 18:36:18.608702] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:11.300 [2024-07-14 18:36:18.608709] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:11.300 [2024-07-14 18:36:18.608726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.608732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.608736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.300 [2024-07-14 18:36:18.608746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.300 [2024-07-14 18:36:18.608777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.300 [2024-07-14 18:36:18.608878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.300 [2024-07-14 18:36:18.608886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.300 [2024-07-14 18:36:18.608890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.608894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.300 [2024-07-14 18:36:18.608901] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:11.300 [2024-07-14 18:36:18.608909] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:11.300 [2024-07-14 18:36:18.608932] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.608937] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.608941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.300 [2024-07-14 18:36:18.608949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.300 [2024-07-14 18:36:18.608969] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.300 [2024-07-14 18:36:18.609079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.300 [2024-07-14 18:36:18.609086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.300 [2024-07-14 18:36:18.609090] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.609094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.300 [2024-07-14 18:36:18.609101] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:11.300 [2024-07-14 18:36:18.609110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:11.300 [2024-07-14 18:36:18.609118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.609122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.300 [2024-07-14 18:36:18.609126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 
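The trace above is the host-side controller initialization that spdk_nvme_identify triggers when it connects to the TCP target at 10.0.0.2:4420: FABRIC CONNECT on the admin queue, PROPERTY GET/SET exchanges for VS, CAP, CC and CSTS, then IDENTIFY and AER configuration. A comparable probe can be driven from SPDK's public C API; the sketch below is illustrative only (it is not part of the test suite), and assumes SPDK headers and libraries from roughly this v24.01 timeframe.

/* identify_sketch.c - illustrative only, not part of the captured run.
 * Connects to the same NVMe-oF/TCP target the log shows and prints a few
 * controller identify fields. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport string the harness passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* spdk_nvme_connect() runs the init state machine traced in the log:
	 * FABRIC CONNECT, PROPERTY GET/SET for VS/CAP/CC/CSTS, IDENTIFY, AER setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect to %s\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s  Serial: %.20s  Namespaces: %u\n",
	       (const char *)cdata->mn, (const char *)cdata->sn, cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}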
00:20:11.300 [2024-07-14 18:36:18.609133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.300 [2024-07-14 18:36:18.609168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.300 [2024-07-14 18:36:18.609246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.300 [2024-07-14 18:36:18.609253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.300 [2024-07-14 18:36:18.609257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.301 [2024-07-14 18:36:18.609268] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:11.301 [2024-07-14 18:36:18.609279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.609295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.301 [2024-07-14 18:36:18.609314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.301 [2024-07-14 18:36:18.609393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.301 [2024-07-14 18:36:18.609400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.301 [2024-07-14 18:36:18.609404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609408] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.301 [2024-07-14 18:36:18.609414] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:11.301 [2024-07-14 18:36:18.609420] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:11.301 [2024-07-14 18:36:18.609428] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:11.301 [2024-07-14 18:36:18.609534] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:11.301 [2024-07-14 18:36:18.609539] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:11.301 [2024-07-14 18:36:18.609549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609558] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.609579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.301 [2024-07-14 18:36:18.609602] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.301 [2024-07-14 18:36:18.609680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.301 [2024-07-14 18:36:18.609687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.301 [2024-07-14 18:36:18.609691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609695] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.301 [2024-07-14 18:36:18.609702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:11.301 [2024-07-14 18:36:18.609712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.609729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.301 [2024-07-14 18:36:18.609747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.301 [2024-07-14 18:36:18.609818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.301 [2024-07-14 18:36:18.609825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.301 [2024-07-14 18:36:18.609829] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.301 [2024-07-14 18:36:18.609839] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:11.301 [2024-07-14 18:36:18.609844] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:11.301 [2024-07-14 18:36:18.609853] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:11.301 [2024-07-14 18:36:18.609870] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:11.301 [2024-07-14 18:36:18.609881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.609890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.609897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.301 [2024-07-14 18:36:18.609933] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.301 [2024-07-14 18:36:18.610067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.301 [2024-07-14 18:36:18.610074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.301 [2024-07-14 18:36:18.610078] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610083] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=4096, cccid=0 00:20:11.301 [2024-07-14 18:36:18.610087] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100e5f0) on tqpair(0xfc4d70): expected_datao=0, payload_size=4096 00:20:11.301 [2024-07-14 18:36:18.610096] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610101] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.301 [2024-07-14 18:36:18.610116] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.301 [2024-07-14 18:36:18.610129] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.301 [2024-07-14 18:36:18.610142] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:11.301 [2024-07-14 18:36:18.610148] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:11.301 [2024-07-14 18:36:18.610152] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:11.301 [2024-07-14 18:36:18.610157] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:11.301 [2024-07-14 18:36:18.610162] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:11.301 [2024-07-14 18:36:18.610178] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:11.301 [2024-07-14 18:36:18.610192] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:11.301 [2024-07-14 18:36:18.610200] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.610216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.301 [2024-07-14 18:36:18.610252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.301 [2024-07-14 18:36:18.610323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.301 [2024-07-14 18:36:18.610330] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.301 [2024-07-14 18:36:18.610334] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100e5f0) on tqpair=0xfc4d70 00:20:11.301 [2024-07-14 18:36:18.610347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610356] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.610363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.301 [2024-07-14 18:36:18.610369] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610377] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.610383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.301 [2024-07-14 18:36:18.610389] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610393] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.610403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.301 [2024-07-14 18:36:18.610409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.610423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.301 [2024-07-14 18:36:18.610429] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:11.301 [2024-07-14 18:36:18.610442] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:11.301 [2024-07-14 18:36:18.610450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.301 [2024-07-14 18:36:18.610458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.301 [2024-07-14 18:36:18.610465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.301 [2024-07-14 18:36:18.610488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e5f0, cid 0, qid 0 00:20:11.301 [2024-07-14 18:36:18.610495] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e750, cid 1, qid 0 00:20:11.301 [2024-07-14 18:36:18.610500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100e8b0, cid 2, qid 0 00:20:11.301 [2024-07-14 18:36:18.610505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.302 [2024-07-14 18:36:18.610510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.302 [2024-07-14 18:36:18.610670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.610678] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.610682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.610687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.610694] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:11.302 [2024-07-14 18:36:18.610699] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.610709] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.610720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.610728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.610732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.610736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.302 [2024-07-14 18:36:18.610744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.302 [2024-07-14 18:36:18.610766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.302 [2024-07-14 18:36:18.610840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.610847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.610851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.610855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.610918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.610929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.610937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.610941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.610945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.302 [2024-07-14 18:36:18.610953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.302 [2024-07-14 18:36:18.610973] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.302 [2024-07-14 18:36:18.611073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.302 [2024-07-14 18:36:18.611080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.302 [2024-07-14 18:36:18.611084] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
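Just before this point the driver logs "Sending keep alive every 5000000 us" and issues GET FEATURES KEEP ALIVE TIMER: the 5-second cadence appears to be derived from the keep-alive timeout the host requests at connect time, with KEEP ALIVE commands spaced at a fraction of it and only sent while the admin queue is being polled. The fragment below shows where that knob lives in the public API; it is a hedged illustration under those assumptions, not code from the test.

#include "spdk/nvme.h"

/* Illustrative only: request a 10 s keep-alive timeout at connect time.
 * With 10 000 ms the driver ends up sending KEEP ALIVE roughly every 5 s,
 * matching the trace above. Keep alives are only transmitted while
 * spdk_nvme_ctrlr_process_admin_completions() is called periodically. */
struct spdk_nvme_ctrlr *
connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10000;

	return spdk_nvme_connect(trid, &opts, sizeof(opts));
}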
00:20:11.302 [2024-07-14 18:36:18.611088] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=4096, cccid=4 00:20:11.302 [2024-07-14 18:36:18.611093] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100eb70) on tqpair(0xfc4d70): expected_datao=0, payload_size=4096 00:20:11.302 [2024-07-14 18:36:18.611101] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611105] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.611121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.611125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.611146] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:11.302 [2024-07-14 18:36:18.611157] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.302 [2024-07-14 18:36:18.611192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.302 [2024-07-14 18:36:18.611213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.302 [2024-07-14 18:36:18.611315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.302 [2024-07-14 18:36:18.611323] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.302 [2024-07-14 18:36:18.611327] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611330] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=4096, cccid=4 00:20:11.302 [2024-07-14 18:36:18.611335] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100eb70) on tqpair(0xfc4d70): expected_datao=0, payload_size=4096 00:20:11.302 [2024-07-14 18:36:18.611344] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611348] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.611363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.611367] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611371] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.611388] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.302 [2024-07-14 18:36:18.611425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.302 [2024-07-14 18:36:18.611445] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.302 [2024-07-14 18:36:18.611542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.302 [2024-07-14 18:36:18.611551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.302 [2024-07-14 18:36:18.611555] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611559] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=4096, cccid=4 00:20:11.302 [2024-07-14 18:36:18.611564] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100eb70) on tqpair(0xfc4d70): expected_datao=0, payload_size=4096 00:20:11.302 [2024-07-14 18:36:18.611572] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611576] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.611603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.611607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611612] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.611622] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611632] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611644] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611657] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611662] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:11.302 [2024-07-14 18:36:18.611667] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to transport ready (timeout 30000 ms) 00:20:11.302 [2024-07-14 18:36:18.611673] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:11.302 [2024-07-14 18:36:18.611707] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.302 [2024-07-14 18:36:18.611730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.302 [2024-07-14 18:36:18.611738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfc4d70) 00:20:11.302 [2024-07-14 18:36:18.611753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.302 [2024-07-14 18:36:18.611787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.302 [2024-07-14 18:36:18.611796] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ecd0, cid 5, qid 0 00:20:11.302 [2024-07-14 18:36:18.611922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.611930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.611934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.611946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.302 [2024-07-14 18:36:18.611953] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.302 [2024-07-14 18:36:18.611957] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611961] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ecd0) on tqpair=0xfc4d70 00:20:11.302 [2024-07-14 18:36:18.611973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.302 [2024-07-14 18:36:18.611978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.611982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.611989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.612009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ecd0, cid 5, qid 0 00:20:11.303 [2024-07-14 18:36:18.612080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.303 [2024-07-14 18:36:18.612093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.303 [2024-07-14 18:36:18.612097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612102] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x100ecd0) on tqpair=0xfc4d70 00:20:11.303 [2024-07-14 18:36:18.612114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.612130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.612151] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ecd0, cid 5, qid 0 00:20:11.303 [2024-07-14 18:36:18.612224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.303 [2024-07-14 18:36:18.612232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.303 [2024-07-14 18:36:18.612235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ecd0) on tqpair=0xfc4d70 00:20:11.303 [2024-07-14 18:36:18.612251] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612256] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612260] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.612267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.612286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ecd0, cid 5, qid 0 00:20:11.303 [2024-07-14 18:36:18.612354] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.303 [2024-07-14 18:36:18.612361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.303 [2024-07-14 18:36:18.612365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ecd0) on tqpair=0xfc4d70 00:20:11.303 [2024-07-14 18:36:18.612415] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.612432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.612440] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.612454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.612462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
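The burst of GET FEATURES (arbitration, power management, temperature threshold, number of queues) and GET LOG PAGE commands here is the identify tool collecting the data that appears in the report further down. Issuing one such admin command through the public API looks roughly like the sketch below; it is illustrative only, with error handling trimmed, and relies on the admin queue being polled for the completion to arrive.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_log_page_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	const struct spdk_nvme_health_information_page *hp = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	} else {
		printf("available spare: %u%%\n", hp->available_spare);
	}
	g_log_page_done = true;
}

/* Fetch the SMART / health information log page (02h) from a connected controller. */
int
read_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_health_information_page *hp;
	int rc;

	/* Payload buffers handed to the driver should come from DMA-safe memory. */
	hp = spdk_zmalloc(sizeof(*hp), 4096, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (hp == NULL) {
		return -ENOMEM;
	}

	g_log_page_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					      SPDK_NVME_GLOBAL_NS_TAG, hp, sizeof(*hp), 0,
					      log_page_done, hp);
	if (rc == 0) {
		/* The transport only makes progress while the admin queue is polled. */
		while (!g_log_page_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
	}

	spdk_free(hp);
	return rc;
}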
00:20:11.303 [2024-07-14 18:36:18.612466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.612476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.612484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.612492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfc4d70) 00:20:11.303 [2024-07-14 18:36:18.612498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.303 [2024-07-14 18:36:18.616529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ecd0, cid 5, qid 0 00:20:11.303 [2024-07-14 18:36:18.616549] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100eb70, cid 4, qid 0 00:20:11.303 [2024-07-14 18:36:18.616556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ee30, cid 6, qid 0 00:20:11.303 [2024-07-14 18:36:18.616561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ef90, cid 7, qid 0 00:20:11.303 [2024-07-14 18:36:18.616576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.303 [2024-07-14 18:36:18.616584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.303 [2024-07-14 18:36:18.616588] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616592] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=8192, cccid=5 00:20:11.303 [2024-07-14 18:36:18.616597] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100ecd0) on tqpair(0xfc4d70): expected_datao=0, payload_size=8192 00:20:11.303 [2024-07-14 18:36:18.616606] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616610] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.303 [2024-07-14 18:36:18.616622] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.303 [2024-07-14 18:36:18.616626] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616630] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=512, cccid=4 00:20:11.303 [2024-07-14 18:36:18.616635] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100eb70) on tqpair(0xfc4d70): expected_datao=0, payload_size=512 00:20:11.303 [2024-07-14 18:36:18.616642] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616646] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.303 [2024-07-14 18:36:18.616658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.303 [2024-07-14 18:36:18.616662] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616666] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=512, cccid=6 00:20:11.303 [2024-07-14 18:36:18.616670] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100ee30) on tqpair(0xfc4d70): expected_datao=0, payload_size=512 00:20:11.303 [2024-07-14 18:36:18.616677] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616681] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:11.303 [2024-07-14 18:36:18.616693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:11.303 [2024-07-14 18:36:18.616697] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616701] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc4d70): datao=0, datal=4096, cccid=7 00:20:11.303 [2024-07-14 18:36:18.616705] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100ef90) on tqpair(0xfc4d70): expected_datao=0, payload_size=4096 00:20:11.303 [2024-07-14 18:36:18.616713] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616717] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.303 [2024-07-14 18:36:18.616729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.303 [2024-07-14 18:36:18.616733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.303 [2024-07-14 18:36:18.616737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ecd0) on tqpair=0xfc4d70 00:20:11.303 ===================================================== 00:20:11.303 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.303 ===================================================== 00:20:11.303 Controller Capabilities/Features 00:20:11.303 ================================ 00:20:11.303 Vendor ID: 8086 00:20:11.303 Subsystem Vendor ID: 8086 00:20:11.303 Serial Number: SPDK00000000000001 00:20:11.303 Model Number: SPDK bdev Controller 00:20:11.303 Firmware Version: 24.01.1 00:20:11.303 Recommended Arb Burst: 6 00:20:11.303 IEEE OUI Identifier: e4 d2 5c 00:20:11.303 Multi-path I/O 00:20:11.303 May have multiple subsystem ports: Yes 00:20:11.303 May have multiple controllers: Yes 00:20:11.303 Associated with SR-IOV VF: No 00:20:11.303 Max Data Transfer Size: 131072 00:20:11.303 Max Number of Namespaces: 32 00:20:11.303 Max Number of I/O Queues: 127 00:20:11.303 NVMe Specification Version (VS): 1.3 00:20:11.303 NVMe Specification Version (Identify): 1.3 00:20:11.303 Maximum Queue Entries: 128 00:20:11.303 Contiguous Queues Required: Yes 00:20:11.303 Arbitration Mechanisms Supported 00:20:11.303 Weighted Round Robin: Not Supported 00:20:11.303 Vendor Specific: Not Supported 00:20:11.303 Reset Timeout: 15000 ms 00:20:11.303 Doorbell Stride: 4 bytes 00:20:11.303 NVM Subsystem Reset: Not Supported 00:20:11.303 Command Sets Supported 00:20:11.303 NVM Command Set: Supported 00:20:11.303 Boot Partition: Not Supported 00:20:11.303 Memory Page Size Minimum: 4096 bytes 00:20:11.303 Memory Page Size Maximum: 4096 bytes 00:20:11.303 Persistent Memory Region: Not Supported 
00:20:11.303 Optional Asynchronous Events Supported 00:20:11.303 Namespace Attribute Notices: Supported 00:20:11.303 Firmware Activation Notices: Not Supported 00:20:11.303 ANA Change Notices: Not Supported 00:20:11.303 PLE Aggregate Log Change Notices: Not Supported 00:20:11.303 LBA Status Info Alert Notices: Not Supported 00:20:11.303 EGE Aggregate Log Change Notices: Not Supported 00:20:11.303 Normal NVM Subsystem Shutdown event: Not Supported 00:20:11.303 Zone Descriptor Change Notices: Not Supported 00:20:11.303 Discovery Log Change Notices: Not Supported 00:20:11.303 Controller Attributes 00:20:11.303 128-bit Host Identifier: Supported 00:20:11.303 Non-Operational Permissive Mode: Not Supported 00:20:11.303 NVM Sets: Not Supported 00:20:11.303 Read Recovery Levels: Not Supported 00:20:11.303 Endurance Groups: Not Supported 00:20:11.303 Predictable Latency Mode: Not Supported 00:20:11.303 Traffic Based Keep ALive: Not Supported 00:20:11.303 Namespace Granularity: Not Supported 00:20:11.304 SQ Associations: Not Supported 00:20:11.304 UUID List: Not Supported 00:20:11.304 Multi-Domain Subsystem: Not Supported 00:20:11.304 Fixed Capacity Management: Not Supported 00:20:11.304 Variable Capacity Management: Not Supported 00:20:11.304 Delete Endurance Group: Not Supported 00:20:11.304 Delete NVM Set: Not Supported 00:20:11.304 Extended LBA Formats Supported: Not Supported 00:20:11.304 Flexible Data Placement Supported: Not Supported 00:20:11.304 00:20:11.304 Controller Memory Buffer Support 00:20:11.304 ================================ 00:20:11.304 Supported: No 00:20:11.304 00:20:11.304 Persistent Memory Region Support 00:20:11.304 ================================ 00:20:11.304 Supported: No 00:20:11.304 00:20:11.304 Admin Command Set Attributes 00:20:11.304 ============================ 00:20:11.304 Security Send/Receive: Not Supported 00:20:11.304 Format NVM: Not Supported 00:20:11.304 Firmware Activate/Download: Not Supported 00:20:11.304 Namespace Management: Not Supported 00:20:11.304 Device Self-Test: Not Supported 00:20:11.304 Directives: Not Supported 00:20:11.304 NVMe-MI: Not Supported 00:20:11.304 Virtualization Management: Not Supported 00:20:11.304 Doorbell Buffer Config: Not Supported 00:20:11.304 Get LBA Status Capability: Not Supported 00:20:11.304 Command & Feature Lockdown Capability: Not Supported 00:20:11.304 Abort Command Limit: 4 00:20:11.304 Async Event Request Limit: 4 00:20:11.304 Number of Firmware Slots: N/A 00:20:11.304 Firmware Slot 1 Read-Only: N/A 00:20:11.304 Firmware Activation Without Reset: [2024-07-14 18:36:18.616759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.304 [2024-07-14 18:36:18.616767] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.304 [2024-07-14 18:36:18.616771] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.304 [2024-07-14 18:36:18.616775] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100eb70) on tqpair=0xfc4d70 00:20:11.304 [2024-07-14 18:36:18.616787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.304 [2024-07-14 18:36:18.616804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.304 [2024-07-14 18:36:18.616808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.304 [2024-07-14 18:36:18.616812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ee30) on tqpair=0xfc4d70 00:20:11.304 [2024-07-14 18:36:18.616821] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.304 [2024-07-14 18:36:18.616827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.304 [2024-07-14 18:36:18.616831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.304 [2024-07-14 18:36:18.616835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ef90) on tqpair=0xfc4d70 00:20:11.304 N/A 00:20:11.304 Multiple Update Detection Support: N/A 00:20:11.304 Firmware Update Granularity: No Information Provided 00:20:11.304 Per-Namespace SMART Log: No 00:20:11.304 Asymmetric Namespace Access Log Page: Not Supported 00:20:11.304 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:11.304 Command Effects Log Page: Supported 00:20:11.304 Get Log Page Extended Data: Supported 00:20:11.304 Telemetry Log Pages: Not Supported 00:20:11.304 Persistent Event Log Pages: Not Supported 00:20:11.304 Supported Log Pages Log Page: May Support 00:20:11.304 Commands Supported & Effects Log Page: Not Supported 00:20:11.304 Feature Identifiers & Effects Log Page:May Support 00:20:11.304 NVMe-MI Commands & Effects Log Page: May Support 00:20:11.304 Data Area 4 for Telemetry Log: Not Supported 00:20:11.304 Error Log Page Entries Supported: 128 00:20:11.304 Keep Alive: Supported 00:20:11.304 Keep Alive Granularity: 10000 ms 00:20:11.304 00:20:11.304 NVM Command Set Attributes 00:20:11.304 ========================== 00:20:11.304 Submission Queue Entry Size 00:20:11.304 Max: 64 00:20:11.304 Min: 64 00:20:11.304 Completion Queue Entry Size 00:20:11.304 Max: 16 00:20:11.304 Min: 16 00:20:11.304 Number of Namespaces: 32 00:20:11.304 Compare Command: Supported 00:20:11.304 Write Uncorrectable Command: Not Supported 00:20:11.304 Dataset Management Command: Supported 00:20:11.304 Write Zeroes Command: Supported 00:20:11.304 Set Features Save Field: Not Supported 00:20:11.304 Reservations: Supported 00:20:11.304 Timestamp: Not Supported 00:20:11.304 Copy: Supported 00:20:11.304 Volatile Write Cache: Present 00:20:11.304 Atomic Write Unit (Normal): 1 00:20:11.304 Atomic Write Unit (PFail): 1 00:20:11.304 Atomic Compare & Write Unit: 1 00:20:11.304 Fused Compare & Write: Supported 00:20:11.304 Scatter-Gather List 00:20:11.304 SGL Command Set: Supported 00:20:11.304 SGL Keyed: Supported 00:20:11.304 SGL Bit Bucket Descriptor: Not Supported 00:20:11.304 SGL Metadata Pointer: Not Supported 00:20:11.304 Oversized SGL: Not Supported 00:20:11.304 SGL Metadata Address: Not Supported 00:20:11.304 SGL Offset: Supported 00:20:11.304 Transport SGL Data Block: Not Supported 00:20:11.304 Replay Protected Memory Block: Not Supported 00:20:11.304 00:20:11.304 Firmware Slot Information 00:20:11.304 ========================= 00:20:11.304 Active slot: 1 00:20:11.304 Slot 1 Firmware Revision: 24.01.1 00:20:11.304 00:20:11.304 00:20:11.304 Commands Supported and Effects 00:20:11.304 ============================== 00:20:11.304 Admin Commands 00:20:11.304 -------------- 00:20:11.304 Get Log Page (02h): Supported 00:20:11.304 Identify (06h): Supported 00:20:11.304 Abort (08h): Supported 00:20:11.304 Set Features (09h): Supported 00:20:11.304 Get Features (0Ah): Supported 00:20:11.304 Asynchronous Event Request (0Ch): Supported 00:20:11.304 Keep Alive (18h): Supported 00:20:11.304 I/O Commands 00:20:11.304 ------------ 00:20:11.304 Flush (00h): Supported LBA-Change 00:20:11.304 Write (01h): Supported LBA-Change 00:20:11.304 Read (02h): Supported 00:20:11.304 Compare (05h): Supported 00:20:11.304 Write Zeroes 
(08h): Supported LBA-Change 00:20:11.304 Dataset Management (09h): Supported LBA-Change 00:20:11.304 Copy (19h): Supported LBA-Change 00:20:11.304 Unknown (79h): Supported LBA-Change 00:20:11.304 Unknown (7Ah): Supported 00:20:11.304 00:20:11.304 Error Log 00:20:11.304 ========= 00:20:11.304 00:20:11.304 Arbitration 00:20:11.304 =========== 00:20:11.304 Arbitration Burst: 1 00:20:11.304 00:20:11.304 Power Management 00:20:11.304 ================ 00:20:11.304 Number of Power States: 1 00:20:11.304 Current Power State: Power State #0 00:20:11.304 Power State #0: 00:20:11.304 Max Power: 0.00 W 00:20:11.304 Non-Operational State: Operational 00:20:11.304 Entry Latency: Not Reported 00:20:11.304 Exit Latency: Not Reported 00:20:11.304 Relative Read Throughput: 0 00:20:11.304 Relative Read Latency: 0 00:20:11.304 Relative Write Throughput: 0 00:20:11.304 Relative Write Latency: 0 00:20:11.304 Idle Power: Not Reported 00:20:11.304 Active Power: Not Reported 00:20:11.304 Non-Operational Permissive Mode: Not Supported 00:20:11.304 00:20:11.304 Health Information 00:20:11.304 ================== 00:20:11.304 Critical Warnings: 00:20:11.304 Available Spare Space: OK 00:20:11.304 Temperature: OK 00:20:11.304 Device Reliability: OK 00:20:11.304 Read Only: No 00:20:11.304 Volatile Memory Backup: OK 00:20:11.304 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:11.304 Temperature Threshold: [2024-07-14 18:36:18.616950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.304 [2024-07-14 18:36:18.616958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.304 [2024-07-14 18:36:18.616962] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfc4d70) 00:20:11.304 [2024-07-14 18:36:18.616971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.304 [2024-07-14 18:36:18.616999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ef90, cid 7, qid 0 00:20:11.304 [2024-07-14 18:36:18.617092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.304 [2024-07-14 18:36:18.617100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.304 [2024-07-14 18:36:18.617104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.304 [2024-07-14 18:36:18.617108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ef90) on tqpair=0xfc4d70 00:20:11.304 [2024-07-14 18:36:18.617146] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:11.304 [2024-07-14 18:36:18.617161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.304 [2024-07-14 18:36:18.617169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.305 [2024-07-14 18:36:18.617175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.305 [2024-07-14 18:36:18.617182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.305 [2024-07-14 18:36:18.617191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.617208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.617232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.617289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.617296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.617300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.617313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.617329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.617351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.617456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.617463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.617467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.617478] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:11.305 [2024-07-14 18:36:18.617483] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:11.305 [2024-07-14 18:36:18.617508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617515] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617519] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.617527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.617548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.617621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.617628] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.617632] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617636] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.617648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.617665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.617683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.617745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.617752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.617756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.617772] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617777] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.617788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.617806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.617869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.617876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.617880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.617896] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.617904] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.617912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.617930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.618004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.618011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.618015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.618030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618039] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.618046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.618064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.618131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.618138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.618142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.618157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.618174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.618191] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.618258] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.618265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.618269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.618284] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618289] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618293] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.618301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.618319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.618385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.618392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.618396] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618401] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.618412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.305 [2024-07-14 18:36:18.618428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.305 [2024-07-14 18:36:18.618446] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.305 [2024-07-14 18:36:18.618547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.305 [2024-07-14 18:36:18.618564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.305 [2024-07-14 18:36:18.618569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.305 [2024-07-14 18:36:18.618586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.305 [2024-07-14 18:36:18.618591] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618595] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.618602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.618624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.618698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.618713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.618718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.618734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.618751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.618771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.618836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.618843] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.618847] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.618863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.618879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.618898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 
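The repeated nvme_tcp DEBUG cycles in this stretch are the host driver detaching from nqn.2016-06.io.spdk:cnode1 once the Identify data for this report has been gathered: the outstanding admin commands are completed as ABORTED - SQ DELETION, the driver updates the controller configuration register to request a normal shutdown (the FABRIC PROPERTY GET/SET pair near the top of this stretch, i.e. CC.SHN), and every FABRIC PROPERTY GET capsule after the "shutdown timeout = 10000 ms" line is one poll of the controller status register (CSTS) until the shutdown is reported finished, logged a little further down as "shutdown complete in 7 milliseconds". A minimal way to reproduce just this attach/identify/detach step against a live target is sketched below; the spdk_nvme_identify path and the subnqn key in the transport-ID string are assumptions (chosen by analogy with the spdk_nvme_perf invocations later in this log), so adjust them to the local build.

  # Attach over NVMe/TCP, print the controller and namespace data shown above,
  # then detach, which drives the CC.SHN shutdown + CSTS polling traced here.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'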
00:20:11.306 [2024-07-14 18:36:18.618958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.618966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.618971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618975] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.618987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.618995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.619103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.619127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.619130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.619146] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619155] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619182] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.619242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.619249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.619253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.619269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619274] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.619374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.619381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.619385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.619401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619405] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.619498] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.619505] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.619509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.619550] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619616] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.619714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.619721] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.619725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.619752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.619867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.619880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.619885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619889] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.619901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619906] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.619910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.619918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.619938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.620009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.620017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.306 [2024-07-14 18:36:18.620021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.620026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.306 [2024-07-14 18:36:18.620037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.620042] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.306 [2024-07-14 18:36:18.620046] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.306 [2024-07-14 18:36:18.620054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.306 [2024-07-14 18:36:18.620073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.306 [2024-07-14 18:36:18.620148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.306 [2024-07-14 18:36:18.620160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.307 [2024-07-14 18:36:18.620164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.307 [2024-07-14 18:36:18.620181] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.307 [2024-07-14 18:36:18.620197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.307 [2024-07-14 18:36:18.620216] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.307 [2024-07-14 18:36:18.620289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.307 [2024-07-14 18:36:18.620304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.307 [2024-07-14 18:36:18.620309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620313] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.307 [2024-07-14 18:36:18.620325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:11.307 [2024-07-14 18:36:18.620330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.307 [2024-07-14 18:36:18.620342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.307 [2024-07-14 18:36:18.620362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.307 [2024-07-14 18:36:18.620446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.307 [2024-07-14 18:36:18.620457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.307 [2024-07-14 18:36:18.620462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.307 [2024-07-14 18:36:18.620478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620483] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.620487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc4d70) 00:20:11.307 [2024-07-14 18:36:18.624521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.307 [2024-07-14 18:36:18.624556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100ea10, cid 3, qid 0 00:20:11.307 [2024-07-14 18:36:18.624631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:11.307 [2024-07-14 18:36:18.624639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:11.307 [2024-07-14 18:36:18.624643] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:11.307 [2024-07-14 18:36:18.624648] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x100ea10) on tqpair=0xfc4d70 00:20:11.307 [2024-07-14 18:36:18.624658] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:11.307 0 Kelvin (-273 Celsius) 00:20:11.307 Available Spare: 0% 00:20:11.307 Available Spare Threshold: 0% 00:20:11.307 Life Percentage Used: 0% 00:20:11.307 Data Units Read: 0 00:20:11.307 Data Units Written: 0 00:20:11.307 Host Read Commands: 0 00:20:11.307 Host Write Commands: 0 00:20:11.307 Controller Busy Time: 0 minutes 00:20:11.307 Power Cycles: 0 00:20:11.307 Power On Hours: 0 hours 00:20:11.307 Unsafe Shutdowns: 0 00:20:11.307 Unrecoverable Media Errors: 0 00:20:11.307 Lifetime Error Log Entries: 0 00:20:11.307 Warning Temperature Time: 0 minutes 00:20:11.307 Critical Temperature Time: 0 minutes 00:20:11.307 00:20:11.307 Number of Queues 00:20:11.307 ================ 00:20:11.307 Number of I/O Submission Queues: 127 00:20:11.307 Number of I/O Completion Queues: 127 00:20:11.307 00:20:11.307 Active Namespaces 00:20:11.307 ================= 00:20:11.307 Namespace ID:1 00:20:11.307 Error Recovery Timeout: Unlimited 00:20:11.307 Command Set Identifier: NVM (00h) 00:20:11.307 Deallocate: Supported 00:20:11.307 Deallocated/Unwritten Error: Not Supported 00:20:11.307 Deallocated Read Value: Unknown 00:20:11.307 Deallocate in Write Zeroes: Not Supported 00:20:11.307 Deallocated Guard Field: 0xFFFF 00:20:11.307 Flush: Supported 
00:20:11.307 Reservation: Supported 00:20:11.307 Namespace Sharing Capabilities: Multiple Controllers 00:20:11.307 Size (in LBAs): 131072 (0GiB) 00:20:11.307 Capacity (in LBAs): 131072 (0GiB) 00:20:11.307 Utilization (in LBAs): 131072 (0GiB) 00:20:11.307 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:11.307 EUI64: ABCDEF0123456789 00:20:11.307 UUID: 2546b1c7-8e54-44ae-b0ba-f14348450fcd 00:20:11.307 Thin Provisioning: Not Supported 00:20:11.307 Per-NS Atomic Units: Yes 00:20:11.307 Atomic Boundary Size (Normal): 0 00:20:11.307 Atomic Boundary Size (PFail): 0 00:20:11.307 Atomic Boundary Offset: 0 00:20:11.307 Maximum Single Source Range Length: 65535 00:20:11.307 Maximum Copy Length: 65535 00:20:11.307 Maximum Source Range Count: 1 00:20:11.307 NGUID/EUI64 Never Reused: No 00:20:11.307 Namespace Write Protected: No 00:20:11.307 Number of LBA Formats: 1 00:20:11.307 Current LBA Format: LBA Format #00 00:20:11.307 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:11.307 00:20:11.307 18:36:18 -- host/identify.sh@51 -- # sync 00:20:11.307 18:36:18 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.307 18:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.307 18:36:18 -- common/autotest_common.sh@10 -- # set +x 00:20:11.307 18:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.307 18:36:18 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:11.307 18:36:18 -- host/identify.sh@56 -- # nvmftestfini 00:20:11.307 18:36:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:11.307 18:36:18 -- nvmf/common.sh@116 -- # sync 00:20:11.566 18:36:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:11.566 18:36:18 -- nvmf/common.sh@119 -- # set +e 00:20:11.566 18:36:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:11.566 18:36:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:11.566 rmmod nvme_tcp 00:20:11.566 rmmod nvme_fabrics 00:20:11.566 rmmod nvme_keyring 00:20:11.566 18:36:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:11.566 18:36:18 -- nvmf/common.sh@123 -- # set -e 00:20:11.566 18:36:18 -- nvmf/common.sh@124 -- # return 0 00:20:11.566 18:36:18 -- nvmf/common.sh@477 -- # '[' -n 93042 ']' 00:20:11.566 18:36:18 -- nvmf/common.sh@478 -- # killprocess 93042 00:20:11.566 18:36:18 -- common/autotest_common.sh@926 -- # '[' -z 93042 ']' 00:20:11.566 18:36:18 -- common/autotest_common.sh@930 -- # kill -0 93042 00:20:11.566 18:36:18 -- common/autotest_common.sh@931 -- # uname 00:20:11.566 18:36:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:11.566 18:36:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93042 00:20:11.566 killing process with pid 93042 00:20:11.566 18:36:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:11.566 18:36:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:11.566 18:36:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93042' 00:20:11.566 18:36:18 -- common/autotest_common.sh@945 -- # kill 93042 00:20:11.566 [2024-07-14 18:36:18.797261] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:11.566 18:36:18 -- common/autotest_common.sh@950 -- # wait 93042 00:20:11.825 18:36:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:11.825 18:36:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:11.825 18:36:19 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:20:11.825 18:36:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.825 18:36:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:11.825 18:36:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.825 18:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.825 18:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.825 18:36:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:11.825 00:20:11.825 real 0m2.681s 00:20:11.825 user 0m7.641s 00:20:11.825 sys 0m0.681s 00:20:11.825 18:36:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.825 18:36:19 -- common/autotest_common.sh@10 -- # set +x 00:20:11.825 ************************************ 00:20:11.825 END TEST nvmf_identify 00:20:11.825 ************************************ 00:20:11.825 18:36:19 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:11.825 18:36:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:11.825 18:36:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.825 18:36:19 -- common/autotest_common.sh@10 -- # set +x 00:20:11.825 ************************************ 00:20:11.825 START TEST nvmf_perf 00:20:11.825 ************************************ 00:20:11.825 18:36:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:11.825 * Looking for test storage... 00:20:11.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:11.825 18:36:19 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.825 18:36:19 -- nvmf/common.sh@7 -- # uname -s 00:20:11.825 18:36:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.825 18:36:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.825 18:36:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.825 18:36:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.825 18:36:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.825 18:36:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.825 18:36:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.825 18:36:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.825 18:36:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.825 18:36:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.826 18:36:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:20:11.826 18:36:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:20:11.826 18:36:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.826 18:36:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.826 18:36:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:11.826 18:36:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.826 18:36:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.826 18:36:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.826 18:36:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.826 18:36:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.826 18:36:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.826 18:36:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.826 18:36:19 -- paths/export.sh@5 -- # export PATH 00:20:11.826 18:36:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.826 18:36:19 -- nvmf/common.sh@46 -- # : 0 00:20:11.826 18:36:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:11.826 18:36:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:11.826 18:36:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:11.826 18:36:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.826 18:36:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.826 18:36:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:11.826 18:36:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:11.826 18:36:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:11.826 18:36:19 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:11.826 18:36:19 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:11.826 18:36:19 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.826 18:36:19 -- host/perf.sh@17 -- # nvmftestinit 00:20:11.826 18:36:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:11.826 18:36:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.826 18:36:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:11.826 18:36:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:11.826 18:36:19 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:20:11.826 18:36:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.826 18:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.826 18:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.826 18:36:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:11.826 18:36:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:11.826 18:36:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:11.826 18:36:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:11.826 18:36:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:11.826 18:36:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:11.826 18:36:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.826 18:36:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.826 18:36:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:11.826 18:36:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:11.826 18:36:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.826 18:36:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.826 18:36:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.826 18:36:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.826 18:36:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.826 18:36:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.826 18:36:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.826 18:36:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.826 18:36:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:11.826 18:36:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:11.826 Cannot find device "nvmf_tgt_br" 00:20:11.826 18:36:19 -- nvmf/common.sh@154 -- # true 00:20:11.826 18:36:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.826 Cannot find device "nvmf_tgt_br2" 00:20:11.826 18:36:19 -- nvmf/common.sh@155 -- # true 00:20:11.826 18:36:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:12.085 18:36:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:12.085 Cannot find device "nvmf_tgt_br" 00:20:12.085 18:36:19 -- nvmf/common.sh@157 -- # true 00:20:12.085 18:36:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:12.085 Cannot find device "nvmf_tgt_br2" 00:20:12.085 18:36:19 -- nvmf/common.sh@158 -- # true 00:20:12.085 18:36:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:12.085 18:36:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:12.085 18:36:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.085 18:36:19 -- nvmf/common.sh@161 -- # true 00:20:12.085 18:36:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.085 18:36:19 -- nvmf/common.sh@162 -- # true 00:20:12.085 18:36:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.085 18:36:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.085 18:36:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.085 18:36:19 -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.085 18:36:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.085 18:36:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.085 18:36:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.085 18:36:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:12.085 18:36:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:12.085 18:36:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:12.085 18:36:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:12.085 18:36:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:12.085 18:36:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:12.085 18:36:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.085 18:36:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.085 18:36:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.085 18:36:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:12.085 18:36:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:12.085 18:36:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:12.085 18:36:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:12.344 18:36:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:12.344 18:36:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:12.344 18:36:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.344 18:36:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:12.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:12.344 00:20:12.344 --- 10.0.0.2 ping statistics --- 00:20:12.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.344 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:12.344 18:36:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:12.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:12.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:12.344 00:20:12.344 --- 10.0.0.3 ping statistics --- 00:20:12.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.344 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:12.344 18:36:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:12.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:12.344 00:20:12.344 --- 10.0.0.1 ping statistics --- 00:20:12.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.344 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:12.344 18:36:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.344 18:36:19 -- nvmf/common.sh@421 -- # return 0 00:20:12.344 18:36:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:12.344 18:36:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.344 18:36:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:12.344 18:36:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:12.344 18:36:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.344 18:36:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:12.344 18:36:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:12.344 18:36:19 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:12.344 18:36:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:12.344 18:36:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:12.344 18:36:19 -- common/autotest_common.sh@10 -- # set +x 00:20:12.344 18:36:19 -- nvmf/common.sh@469 -- # nvmfpid=93267 00:20:12.344 18:36:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:12.344 18:36:19 -- nvmf/common.sh@470 -- # waitforlisten 93267 00:20:12.344 18:36:19 -- common/autotest_common.sh@819 -- # '[' -z 93267 ']' 00:20:12.344 18:36:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.344 18:36:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:12.344 18:36:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.344 18:36:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:12.344 18:36:19 -- common/autotest_common.sh@10 -- # set +x 00:20:12.344 [2024-07-14 18:36:19.627050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:12.344 [2024-07-14 18:36:19.627146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.344 [2024-07-14 18:36:19.766675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.603 [2024-07-14 18:36:19.843358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:12.603 [2024-07-14 18:36:19.843495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.603 [2024-07-14 18:36:19.843550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.603 [2024-07-14 18:36:19.843560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
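The nvmftestinit plumbing traced above boils down to a small veth/bridge/namespace topology: the target-side interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, both sides are joined through the nvmf_br bridge, TCP port 4420 is opened in iptables, and the pings above confirm connectivity before nvmf_tgt is launched inside the namespace. A condensed sketch of the same bring-up follows; it keeps the names and addresses from this run and leaves out the second target interface (nvmf_tgt_if2 / 10.0.0.3) as well as MTU tuning and cleanup handling.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target namespace, as in the statistics above
  # Start the target inside the namespace; the harness waits for its RPC socket
  # (waitforlisten) before configuring it over rpc.py, as the perf.sh trace does below.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420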
00:20:12.603 [2024-07-14 18:36:19.843670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.603 [2024-07-14 18:36:19.843974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.603 [2024-07-14 18:36:19.844679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.603 [2024-07-14 18:36:19.844688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.538 18:36:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:13.538 18:36:20 -- common/autotest_common.sh@852 -- # return 0 00:20:13.538 18:36:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:13.538 18:36:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:13.538 18:36:20 -- common/autotest_common.sh@10 -- # set +x 00:20:13.538 18:36:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.538 18:36:20 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:13.538 18:36:20 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:13.796 18:36:21 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:13.796 18:36:21 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:14.054 18:36:21 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:14.054 18:36:21 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.313 18:36:21 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:14.313 18:36:21 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:14.313 18:36:21 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:14.313 18:36:21 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:14.313 18:36:21 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.313 [2024-07-14 18:36:21.733873] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.570 18:36:21 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.571 18:36:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.571 18:36:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.842 18:36:22 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.842 18:36:22 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:15.112 18:36:22 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.369 [2024-07-14 18:36:22.652020] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.369 18:36:22 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:15.628 18:36:22 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:15.628 18:36:22 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:15.628 18:36:22 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:15.628 18:36:22 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:17.005 Initializing NVMe 
Controllers 00:20:17.005 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:17.005 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:17.005 Initialization complete. Launching workers. 00:20:17.005 ======================================================== 00:20:17.005 Latency(us) 00:20:17.005 Device Information : IOPS MiB/s Average min max 00:20:17.005 PCIE (0000:00:06.0) NSID 1 from core 0: 22559.98 88.12 1423.84 402.23 8226.41 00:20:17.005 ======================================================== 00:20:17.005 Total : 22559.98 88.12 1423.84 402.23 8226.41 00:20:17.005 00:20:17.005 18:36:24 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:17.938 Initializing NVMe Controllers 00:20:17.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.938 Initialization complete. Launching workers. 00:20:17.938 ======================================================== 00:20:17.938 Latency(us) 00:20:17.938 Device Information : IOPS MiB/s Average min max 00:20:17.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2997.82 11.71 333.23 109.03 7229.40 00:20:17.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.99 0.50 7936.36 4934.38 12053.18 00:20:17.938 ======================================================== 00:20:17.939 Total : 3124.81 12.21 642.22 109.03 12053.18 00:20:17.939 00:20:17.939 18:36:25 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.313 Initializing NVMe Controllers 00:20:19.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:19.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:19.313 Initialization complete. Launching workers. 00:20:19.313 ======================================================== 00:20:19.313 Latency(us) 00:20:19.313 Device Information : IOPS MiB/s Average min max 00:20:19.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8675.61 33.89 3689.50 694.47 7988.58 00:20:19.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2690.16 10.51 11967.10 6870.92 20199.62 00:20:19.313 ======================================================== 00:20:19.313 Total : 11365.77 44.40 5648.72 694.47 20199.62 00:20:19.313 00:20:19.313 18:36:26 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:19.313 18:36:26 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:21.841 Initializing NVMe Controllers 00:20:21.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.841 Controller IO queue size 128, less than required. 00:20:21.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.841 Controller IO queue size 128, less than required. 
00:20:21.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:21.841 Initialization complete. Launching workers. 00:20:21.841 ======================================================== 00:20:21.841 Latency(us) 00:20:21.841 Device Information : IOPS MiB/s Average min max 00:20:21.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1160.12 290.03 112692.76 71452.45 177305.39 00:20:21.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.77 149.44 223222.57 143550.94 360777.74 00:20:21.841 ======================================================== 00:20:21.841 Total : 1757.89 439.47 150278.54 71452.45 360777.74 00:20:21.841 00:20:21.841 18:36:29 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:22.099 No valid NVMe controllers or AIO or URING devices found 00:20:22.099 Initializing NVMe Controllers 00:20:22.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.099 Controller IO queue size 128, less than required. 00:20:22.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.099 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:22.099 Controller IO queue size 128, less than required. 00:20:22.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.099 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:22.099 WARNING: Some requested NVMe devices were skipped 00:20:22.099 18:36:29 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:24.629 Initializing NVMe Controllers 00:20:24.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.629 Controller IO queue size 128, less than required. 00:20:24.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.629 Controller IO queue size 128, less than required. 00:20:24.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.629 Initialization complete. Launching workers. 
00:20:24.629 00:20:24.629 ==================== 00:20:24.629 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:24.629 TCP transport: 00:20:24.629 polls: 8329 00:20:24.629 idle_polls: 5316 00:20:24.629 sock_completions: 3013 00:20:24.629 nvme_completions: 4073 00:20:24.629 submitted_requests: 6253 00:20:24.629 queued_requests: 1 00:20:24.629 00:20:24.629 ==================== 00:20:24.629 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:24.629 TCP transport: 00:20:24.629 polls: 8551 00:20:24.629 idle_polls: 5578 00:20:24.629 sock_completions: 2973 00:20:24.629 nvme_completions: 5971 00:20:24.629 submitted_requests: 9149 00:20:24.629 queued_requests: 1 00:20:24.629 ======================================================== 00:20:24.629 Latency(us) 00:20:24.629 Device Information : IOPS MiB/s Average min max 00:20:24.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1079.50 269.88 121883.54 82540.44 203799.47 00:20:24.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1552.90 388.23 83027.06 43626.56 129602.11 00:20:24.629 ======================================================== 00:20:24.629 Total : 2632.40 658.10 98961.38 43626.56 203799.47 00:20:24.629 00:20:24.629 18:36:31 -- host/perf.sh@66 -- # sync 00:20:24.629 18:36:31 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.887 18:36:32 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:24.887 18:36:32 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:24.887 18:36:32 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:25.146 18:36:32 -- host/perf.sh@72 -- # ls_guid=88af4036-8c3a-4678-9535-31988703d8b6 00:20:25.146 18:36:32 -- host/perf.sh@73 -- # get_lvs_free_mb 88af4036-8c3a-4678-9535-31988703d8b6 00:20:25.146 18:36:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=88af4036-8c3a-4678-9535-31988703d8b6 00:20:25.146 18:36:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:25.146 18:36:32 -- common/autotest_common.sh@1345 -- # local fc 00:20:25.146 18:36:32 -- common/autotest_common.sh@1346 -- # local cs 00:20:25.146 18:36:32 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:25.404 18:36:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:25.404 { 00:20:25.404 "base_bdev": "Nvme0n1", 00:20:25.404 "block_size": 4096, 00:20:25.404 "cluster_size": 4194304, 00:20:25.404 "free_clusters": 1278, 00:20:25.404 "name": "lvs_0", 00:20:25.404 "total_data_clusters": 1278, 00:20:25.404 "uuid": "88af4036-8c3a-4678-9535-31988703d8b6" 00:20:25.404 } 00:20:25.404 ]' 00:20:25.404 18:36:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="88af4036-8c3a-4678-9535-31988703d8b6") .free_clusters' 00:20:25.662 18:36:32 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:25.662 18:36:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="88af4036-8c3a-4678-9535-31988703d8b6") .cluster_size' 00:20:25.662 5112 00:20:25.662 18:36:32 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:25.662 18:36:32 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:25.662 18:36:32 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:25.662 18:36:32 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:25.662 18:36:32 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 88af4036-8c3a-4678-9535-31988703d8b6 lbd_0 5112 00:20:25.920 18:36:33 -- host/perf.sh@80 -- # lb_guid=fe1a5c99-8fa4-44d4-8a1b-404f7573c920 00:20:25.920 18:36:33 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore fe1a5c99-8fa4-44d4-8a1b-404f7573c920 lvs_n_0 00:20:26.177 18:36:33 -- host/perf.sh@83 -- # ls_nested_guid=ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae 00:20:26.178 18:36:33 -- host/perf.sh@84 -- # get_lvs_free_mb ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae 00:20:26.178 18:36:33 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae 00:20:26.178 18:36:33 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:26.178 18:36:33 -- common/autotest_common.sh@1345 -- # local fc 00:20:26.178 18:36:33 -- common/autotest_common.sh@1346 -- # local cs 00:20:26.178 18:36:33 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:26.435 18:36:33 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:26.435 { 00:20:26.435 "base_bdev": "Nvme0n1", 00:20:26.435 "block_size": 4096, 00:20:26.435 "cluster_size": 4194304, 00:20:26.435 "free_clusters": 0, 00:20:26.435 "name": "lvs_0", 00:20:26.435 "total_data_clusters": 1278, 00:20:26.435 "uuid": "88af4036-8c3a-4678-9535-31988703d8b6" 00:20:26.435 }, 00:20:26.435 { 00:20:26.435 "base_bdev": "fe1a5c99-8fa4-44d4-8a1b-404f7573c920", 00:20:26.435 "block_size": 4096, 00:20:26.435 "cluster_size": 4194304, 00:20:26.435 "free_clusters": 1276, 00:20:26.435 "name": "lvs_n_0", 00:20:26.435 "total_data_clusters": 1276, 00:20:26.435 "uuid": "ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae" 00:20:26.435 } 00:20:26.435 ]' 00:20:26.435 18:36:33 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae") .free_clusters' 00:20:26.435 18:36:33 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:26.435 18:36:33 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae") .cluster_size' 00:20:26.435 5104 00:20:26.435 18:36:33 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:26.435 18:36:33 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:26.435 18:36:33 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:26.435 18:36:33 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:26.435 18:36:33 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad2e70a7-8d1b-4fbd-8495-bedddd0f0eae lbd_nest_0 5104 00:20:27.001 18:36:34 -- host/perf.sh@88 -- # lb_nested_guid=3af383d0-5d1b-4276-b05a-e511b8c68ac1 00:20:27.001 18:36:34 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.001 18:36:34 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:27.001 18:36:34 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3af383d0-5d1b-4276-b05a-e511b8c68ac1 00:20:27.259 18:36:34 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.517 18:36:34 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:27.517 18:36:34 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:27.517 18:36:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:27.517 18:36:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:27.517 18:36:34 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:27.776 No valid NVMe controllers or AIO or URING devices found 00:20:27.776 Initializing NVMe Controllers 00:20:27.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.776 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:27.776 WARNING: Some requested NVMe devices were skipped 00:20:27.776 18:36:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:27.776 18:36:35 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:39.973 Initializing NVMe Controllers 00:20:39.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:39.973 Initialization complete. Launching workers. 00:20:39.973 ======================================================== 00:20:39.973 Latency(us) 00:20:39.973 Device Information : IOPS MiB/s Average min max 00:20:39.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 831.90 103.99 1202.17 355.42 8500.97 00:20:39.973 ======================================================== 00:20:39.973 Total : 831.90 103.99 1202.17 355.42 8500.97 00:20:39.973 00:20:39.973 18:36:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:39.973 18:36:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:39.973 18:36:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:39.973 No valid NVMe controllers or AIO or URING devices found 00:20:39.973 Initializing NVMe Controllers 00:20:39.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.973 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:39.973 WARNING: Some requested NVMe devices were skipped 00:20:39.973 18:36:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:39.973 18:36:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:49.934 Initializing NVMe Controllers 00:20:49.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.934 Initialization complete. Launching workers. 
00:20:49.934 ======================================================== 00:20:49.934 Latency(us) 00:20:49.934 Device Information : IOPS MiB/s Average min max 00:20:49.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1126.50 140.81 28457.01 8006.36 255378.64 00:20:49.934 ======================================================== 00:20:49.934 Total : 1126.50 140.81 28457.01 8006.36 255378.64 00:20:49.934 00:20:49.934 18:36:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:49.934 18:36:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:49.934 18:36:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:49.934 No valid NVMe controllers or AIO or URING devices found 00:20:49.934 Initializing NVMe Controllers 00:20:49.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.934 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:49.934 WARNING: Some requested NVMe devices were skipped 00:20:49.934 18:36:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:49.934 18:36:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:59.904 Initializing NVMe Controllers 00:20:59.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.904 Controller IO queue size 128, less than required. 00:20:59.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:59.904 Initialization complete. Launching workers. 
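The q=1/32/128 runs in this stretch are driven by host/perf.sh's nested sweep over queue depth and I/O size against the same TCP listener; condensed, and with PERF/TRID being illustrative variable names rather than the script's own, the loop is roughly:

    # Sweep spdk_nvme_perf over queue depth x I/O size (50/50 random read/write, 10 s each)
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
      done
    done

The 512-byte passes produce "No valid NVMe controllers" warnings because the backing lvol namespace reports a 4096-byte block size, so only the 131072-byte passes yield latency tables.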
00:20:59.904 ======================================================== 00:20:59.904 Latency(us) 00:20:59.904 Device Information : IOPS MiB/s Average min max 00:20:59.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4051.87 506.48 31588.33 12131.52 70795.85 00:20:59.904 ======================================================== 00:20:59.904 Total : 4051.87 506.48 31588.33 12131.52 70795.85 00:20:59.904 00:20:59.904 18:37:06 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.904 18:37:06 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3af383d0-5d1b-4276-b05a-e511b8c68ac1 00:20:59.904 18:37:07 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:00.163 18:37:07 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fe1a5c99-8fa4-44d4-8a1b-404f7573c920 00:21:00.421 18:37:07 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:00.679 18:37:07 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:00.679 18:37:07 -- host/perf.sh@114 -- # nvmftestfini 00:21:00.679 18:37:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:00.679 18:37:07 -- nvmf/common.sh@116 -- # sync 00:21:00.679 18:37:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:00.679 18:37:07 -- nvmf/common.sh@119 -- # set +e 00:21:00.679 18:37:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:00.679 18:37:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:00.679 rmmod nvme_tcp 00:21:00.679 rmmod nvme_fabrics 00:21:00.679 rmmod nvme_keyring 00:21:00.679 18:37:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:00.679 18:37:07 -- nvmf/common.sh@123 -- # set -e 00:21:00.679 18:37:07 -- nvmf/common.sh@124 -- # return 0 00:21:00.679 18:37:07 -- nvmf/common.sh@477 -- # '[' -n 93267 ']' 00:21:00.679 18:37:07 -- nvmf/common.sh@478 -- # killprocess 93267 00:21:00.679 18:37:07 -- common/autotest_common.sh@926 -- # '[' -z 93267 ']' 00:21:00.679 18:37:07 -- common/autotest_common.sh@930 -- # kill -0 93267 00:21:00.679 18:37:07 -- common/autotest_common.sh@931 -- # uname 00:21:00.679 18:37:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.679 18:37:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93267 00:21:00.679 killing process with pid 93267 00:21:00.679 18:37:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:00.679 18:37:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:00.679 18:37:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93267' 00:21:00.679 18:37:07 -- common/autotest_common.sh@945 -- # kill 93267 00:21:00.679 18:37:07 -- common/autotest_common.sh@950 -- # wait 93267 00:21:02.581 18:37:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:02.581 18:37:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:02.582 18:37:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:02.582 18:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.582 18:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.582 18:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.582 18:37:09 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:02.582 ************************************ 00:21:02.582 END TEST nvmf_perf 00:21:02.582 ************************************ 00:21:02.582 00:21:02.582 real 0m50.642s 00:21:02.582 user 3m12.025s 00:21:02.582 sys 0m10.290s 00:21:02.582 18:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.582 18:37:09 -- common/autotest_common.sh@10 -- # set +x 00:21:02.582 18:37:09 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:02.582 18:37:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:02.582 18:37:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:02.582 18:37:09 -- common/autotest_common.sh@10 -- # set +x 00:21:02.582 ************************************ 00:21:02.582 START TEST nvmf_fio_host 00:21:02.582 ************************************ 00:21:02.582 18:37:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:02.582 * Looking for test storage... 00:21:02.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:02.582 18:37:09 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:02.582 18:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.582 18:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.582 18:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.582 18:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- paths/export.sh@5 -- # export PATH 00:21:02.582 18:37:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:02.582 18:37:09 -- nvmf/common.sh@7 -- # uname -s 00:21:02.582 18:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.582 18:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.582 18:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.582 18:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.582 18:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.582 18:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.582 18:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.582 18:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.582 18:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.582 18:37:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:21:02.582 18:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:21:02.582 18:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.582 18:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.582 18:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:02.582 18:37:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:02.582 18:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.582 18:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.582 18:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.582 18:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- paths/export.sh@5 -- # export PATH 00:21:02.582 18:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.582 18:37:09 -- nvmf/common.sh@46 -- # : 0 00:21:02.582 18:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:02.582 18:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:02.582 18:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:02.582 18:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.582 18:37:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.582 18:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:02.582 18:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:02.582 18:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:02.582 18:37:09 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.582 18:37:09 -- host/fio.sh@14 -- # nvmftestinit 00:21:02.582 18:37:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:02.582 18:37:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.582 18:37:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:02.582 18:37:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:02.582 18:37:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:02.582 18:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.582 18:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.582 18:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.582 18:37:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:02.582 18:37:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:02.582 18:37:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.582 18:37:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.582 18:37:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:02.582 18:37:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:02.582 18:37:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:02.582 18:37:09 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:02.582 18:37:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:02.582 18:37:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.582 18:37:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:02.582 18:37:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:02.582 18:37:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:02.582 18:37:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:02.582 18:37:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:02.582 18:37:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:02.582 Cannot find device "nvmf_tgt_br" 00:21:02.582 18:37:09 -- nvmf/common.sh@154 -- # true 00:21:02.582 18:37:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.582 Cannot find device "nvmf_tgt_br2" 00:21:02.582 18:37:09 -- nvmf/common.sh@155 -- # true 00:21:02.582 18:37:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:02.582 18:37:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:02.582 Cannot find device "nvmf_tgt_br" 00:21:02.582 18:37:09 -- nvmf/common.sh@157 -- # true 00:21:02.582 18:37:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:02.582 Cannot find device "nvmf_tgt_br2" 00:21:02.582 18:37:09 -- nvmf/common.sh@158 -- # true 00:21:02.582 18:37:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:02.841 18:37:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:02.841 18:37:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.841 18:37:10 -- nvmf/common.sh@161 -- # true 00:21:02.841 18:37:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.841 18:37:10 -- nvmf/common.sh@162 -- # true 00:21:02.841 18:37:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:02.841 18:37:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:02.841 18:37:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:02.841 18:37:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:02.841 18:37:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:02.841 18:37:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:02.841 18:37:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:02.841 18:37:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:02.841 18:37:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:02.841 18:37:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:02.841 18:37:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:02.841 18:37:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:02.841 18:37:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:02.841 18:37:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.841 18:37:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
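For the virt NET_TYPE, nvmf_veth_init builds the test network that the ping checks just below verify: a namespace for the target, veth pairs, and a bridge joining the host-side peers. Reduced to the single initiator-to-target path the tests actually use (interface, namespace, and address names exactly as in this log), the setup is roughly:

    # Target side lives in its own network namespace, reachable over a veth + bridge path
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # plus an iptables ACCEPT for TCP/4420 on nvmf_init_if, as in the log

This sketch omits the second target interface (10.0.0.3) and the teardown of any previous run.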
00:21:02.841 18:37:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.841 18:37:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:02.841 18:37:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:02.841 18:37:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.841 18:37:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.841 18:37:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.841 18:37:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.841 18:37:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.841 18:37:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:02.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:21:02.841 00:21:02.841 --- 10.0.0.2 ping statistics --- 00:21:02.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.841 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:21:02.841 18:37:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:02.841 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.841 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:21:02.841 00:21:02.841 --- 10.0.0.3 ping statistics --- 00:21:02.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.841 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:02.841 18:37:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:21:02.841 00:21:02.841 --- 10.0.0.1 ping statistics --- 00:21:02.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.841 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:02.841 18:37:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.841 18:37:10 -- nvmf/common.sh@421 -- # return 0 00:21:02.841 18:37:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:02.841 18:37:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.841 18:37:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:02.841 18:37:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:02.841 18:37:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.841 18:37:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:02.841 18:37:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:02.841 18:37:10 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:02.841 18:37:10 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:02.841 18:37:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:02.841 18:37:10 -- common/autotest_common.sh@10 -- # set +x 00:21:02.841 18:37:10 -- host/fio.sh@24 -- # nvmfpid=94226 00:21:02.841 18:37:10 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.841 18:37:10 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.841 18:37:10 -- host/fio.sh@28 -- # waitforlisten 94226 00:21:02.841 18:37:10 -- common/autotest_common.sh@819 -- # '[' -z 94226 ']' 00:21:02.841 18:37:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.841 18:37:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.841 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.841 18:37:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.841 18:37:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.841 18:37:10 -- common/autotest_common.sh@10 -- # set +x 00:21:03.100 [2024-07-14 18:37:10.313283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:03.100 [2024-07-14 18:37:10.313395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.100 [2024-07-14 18:37:10.453816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.359 [2024-07-14 18:37:10.551331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.359 [2024-07-14 18:37:10.551741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.359 [2024-07-14 18:37:10.551797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.359 [2024-07-14 18:37:10.552045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.359 [2024-07-14 18:37:10.552238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.359 [2024-07-14 18:37:10.552369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.359 [2024-07-14 18:37:10.552445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.359 [2024-07-14 18:37:10.552445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.925 18:37:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.925 18:37:11 -- common/autotest_common.sh@852 -- # return 0 00:21:03.925 18:37:11 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:04.181 [2024-07-14 18:37:11.511445] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.181 18:37:11 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:04.181 18:37:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:04.181 18:37:11 -- common/autotest_common.sh@10 -- # set +x 00:21:04.181 18:37:11 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:04.746 Malloc1 00:21:04.746 18:37:11 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.746 18:37:12 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:05.004 18:37:12 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.275 [2024-07-14 18:37:12.564670] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.275 18:37:12 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:05.547 18:37:12 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:05.547 18:37:12 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.547 18:37:12 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.547 18:37:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:05.547 18:37:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.547 18:37:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:05.547 18:37:12 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.547 18:37:12 -- common/autotest_common.sh@1320 -- # shift 00:21:05.547 18:37:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:05.547 18:37:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:05.547 18:37:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:05.547 18:37:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:05.547 18:37:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:05.547 18:37:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:05.547 18:37:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:05.547 18:37:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.806 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:05.806 fio-3.35 00:21:05.806 Starting 1 thread 00:21:08.332 00:21:08.332 test: (groupid=0, jobs=1): err= 0: pid=94356: Sun Jul 14 18:37:15 2024 00:21:08.332 read: IOPS=9647, BW=37.7MiB/s (39.5MB/s)(75.6MiB/2006msec) 00:21:08.332 slat (usec): min=2, max=349, avg= 2.55, stdev= 3.26 00:21:08.332 clat (usec): min=3249, max=12036, avg=7023.86, stdev=579.32 00:21:08.332 lat (usec): min=3308, max=12039, avg=7026.41, stdev=579.13 00:21:08.332 clat percentiles (usec): 00:21:08.332 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:21:08.332 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:21:08.332 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 7963], 00:21:08.332 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[10552], 99.95th=[10945], 00:21:08.332 | 99.99th=[11731] 00:21:08.332 bw ( KiB/s): min=37736, max=39336, per=100.00%, avg=38596.00, stdev=658.03, samples=4 00:21:08.332 iops : min= 9434, max= 9834, avg=9649.00, stdev=164.51, samples=4 00:21:08.332 write: IOPS=9656, BW=37.7MiB/s (39.6MB/s)(75.7MiB/2006msec); 0 zone resets 00:21:08.332 slat (usec): min=2, 
max=248, avg= 2.61, stdev= 1.98 00:21:08.332 clat (usec): min=2491, max=12008, avg=6175.61, stdev=484.72 00:21:08.332 lat (usec): min=2505, max=12011, avg=6178.21, stdev=484.61 00:21:08.332 clat percentiles (usec): 00:21:08.332 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:21:08.332 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:21:08.332 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:21:08.332 | 99.00th=[ 7242], 99.50th=[ 7439], 99.90th=[10683], 99.95th=[11731], 00:21:08.332 | 99.99th=[11994] 00:21:08.332 bw ( KiB/s): min=38280, max=39048, per=99.93%, avg=38596.00, stdev=351.12, samples=4 00:21:08.332 iops : min= 9570, max= 9762, avg=9649.00, stdev=87.78, samples=4 00:21:08.332 lat (msec) : 4=0.07%, 10=99.75%, 20=0.18% 00:21:08.332 cpu : usr=67.88%, sys=22.99%, ctx=10, majf=0, minf=5 00:21:08.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:08.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:08.332 issued rwts: total=19353,19370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:08.332 00:21:08.332 Run status group 0 (all jobs): 00:21:08.332 READ: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=75.6MiB (79.3MB), run=2006-2006msec 00:21:08.332 WRITE: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=75.7MiB (79.3MB), run=2006-2006msec 00:21:08.332 18:37:15 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:08.332 18:37:15 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:08.332 18:37:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:08.332 18:37:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:08.332 18:37:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:08.332 18:37:15 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:08.332 18:37:15 -- common/autotest_common.sh@1320 -- # shift 00:21:08.332 18:37:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:08.332 18:37:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:08.332 18:37:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:08.332 18:37:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:08.332 18:37:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:08.332 18:37:15 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:08.332 18:37:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:08.332 18:37:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:08.332 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:08.332 fio-3.35 00:21:08.332 Starting 1 thread 00:21:10.865 00:21:10.865 test: (groupid=0, jobs=1): err= 0: pid=94405: Sun Jul 14 18:37:17 2024 00:21:10.865 read: IOPS=8492, BW=133MiB/s (139MB/s)(266MiB/2005msec) 00:21:10.865 slat (usec): min=3, max=127, avg= 3.74, stdev= 1.76 00:21:10.865 clat (usec): min=2462, max=16651, avg=8970.50, stdev=2169.12 00:21:10.865 lat (usec): min=2466, max=16654, avg=8974.24, stdev=2169.14 00:21:10.865 clat percentiles (usec): 00:21:10.865 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6980], 00:21:10.865 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:21:10.865 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11469], 95.00th=[12387], 00:21:10.865 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15533], 99.95th=[15664], 00:21:10.865 | 99.99th=[16581] 00:21:10.865 bw ( KiB/s): min=60320, max=79808, per=51.62%, avg=70144.00, stdev=10252.96, samples=4 00:21:10.865 iops : min= 3770, max= 4988, avg=4384.00, stdev=640.81, samples=4 00:21:10.865 write: IOPS=5136, BW=80.3MiB/s (84.2MB/s)(143MiB/1783msec); 0 zone resets 00:21:10.865 slat (usec): min=36, max=325, avg=37.80, stdev= 5.98 00:21:10.865 clat (usec): min=2777, max=18244, avg=10698.94, stdev=1712.57 00:21:10.865 lat (usec): min=2814, max=18281, avg=10736.74, stdev=1712.51 00:21:10.865 clat percentiles (usec): 00:21:10.865 | 1.00th=[ 7046], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:21:10.865 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:21:10.865 | 70.00th=[11338], 80.00th=[11994], 90.00th=[12911], 95.00th=[13829], 00:21:10.865 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17171], 99.95th=[17171], 00:21:10.865 | 99.99th=[18220] 00:21:10.865 bw ( KiB/s): min=63296, max=82944, per=88.69%, avg=72888.00, stdev=10306.79, samples=4 00:21:10.865 iops : min= 3956, max= 5184, avg=4555.50, stdev=644.17, samples=4 00:21:10.865 lat (msec) : 4=0.23%, 10=55.33%, 20=44.45% 00:21:10.865 cpu : usr=71.01%, sys=18.91%, ctx=5, majf=0, minf=1 00:21:10.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:10.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:10.865 issued rwts: total=17027,9158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:10.865 00:21:10.865 Run status group 0 (all jobs): 00:21:10.865 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2005-2005msec 00:21:10.865 WRITE: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=143MiB (150MB), run=1783-1783msec 00:21:10.865 18:37:17 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.865 18:37:17 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:10.865 18:37:17 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:10.865 18:37:17 -- host/fio.sh@51 -- # get_nvme_bdfs 
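Both fio passes above (example_config.fio and mock_sgl_config.fio) go through the SPDK fio plugin rather than a kernel NVMe-oF connection: the plugin is preloaded into stock fio and the target is described entirely in --filename. With the sanitizer-library probing stripped out, and PLUGIN/CONFIG being illustrative variable names, the invocation is essentially:

    # fio with the SPDK NVMe ioengine, pointed at the TCP subsystem on 10.0.0.2:4420
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    CONFIG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$CONFIG" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The mock_sgl_config.fio run drops the --bs override and uses the 16 KiB block size from its job file.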
00:21:10.865 18:37:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:10.865 18:37:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:21:10.865 18:37:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:10.865 18:37:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:10.865 18:37:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:10.865 18:37:18 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:10.865 18:37:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:10.865 18:37:18 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:11.124 Nvme0n1 00:21:11.124 18:37:18 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:11.382 18:37:18 -- host/fio.sh@53 -- # ls_guid=47f83d06-43aa-4e8c-8989-0e299d183d62 00:21:11.382 18:37:18 -- host/fio.sh@54 -- # get_lvs_free_mb 47f83d06-43aa-4e8c-8989-0e299d183d62 00:21:11.382 18:37:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=47f83d06-43aa-4e8c-8989-0e299d183d62 00:21:11.382 18:37:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:11.382 18:37:18 -- common/autotest_common.sh@1345 -- # local fc 00:21:11.382 18:37:18 -- common/autotest_common.sh@1346 -- # local cs 00:21:11.382 18:37:18 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:11.640 18:37:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:11.640 { 00:21:11.640 "base_bdev": "Nvme0n1", 00:21:11.640 "block_size": 4096, 00:21:11.640 "cluster_size": 1073741824, 00:21:11.640 "free_clusters": 4, 00:21:11.640 "name": "lvs_0", 00:21:11.640 "total_data_clusters": 4, 00:21:11.640 "uuid": "47f83d06-43aa-4e8c-8989-0e299d183d62" 00:21:11.640 } 00:21:11.640 ]' 00:21:11.641 18:37:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="47f83d06-43aa-4e8c-8989-0e299d183d62") .free_clusters' 00:21:11.641 18:37:18 -- common/autotest_common.sh@1348 -- # fc=4 00:21:11.641 18:37:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="47f83d06-43aa-4e8c-8989-0e299d183d62") .cluster_size' 00:21:11.641 18:37:18 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:21:11.641 18:37:18 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:21:11.641 4096 00:21:11.641 18:37:18 -- common/autotest_common.sh@1353 -- # echo 4096 00:21:11.641 18:37:18 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:11.898 7ceee80f-2f64-4ca6-8f03-50989f1edc94 00:21:11.899 18:37:19 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:12.157 18:37:19 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:12.415 18:37:19 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:12.674 18:37:19 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.674 18:37:19 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.674 18:37:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:12.674 18:37:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.674 18:37:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:12.674 18:37:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.674 18:37:19 -- common/autotest_common.sh@1320 -- # shift 00:21:12.674 18:37:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:12.674 18:37:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:12.674 18:37:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:12.674 18:37:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:12.674 18:37:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:12.674 18:37:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:12.674 18:37:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:12.674 18:37:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.674 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:12.674 fio-3.35 00:21:12.674 Starting 1 thread 00:21:15.205 00:21:15.205 test: (groupid=0, jobs=1): err= 0: pid=94552: Sun Jul 14 18:37:22 2024 00:21:15.205 read: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(53.0MiB/2008msec) 00:21:15.205 slat (usec): min=2, max=339, avg= 2.59, stdev= 3.66 00:21:15.205 clat (usec): min=4022, max=17653, avg=10067.63, stdev=953.63 00:21:15.205 lat (usec): min=4032, max=17656, avg=10070.22, stdev=953.40 00:21:15.205 clat percentiles (usec): 00:21:15.205 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:21:15.205 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:21:15.205 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:21:15.205 | 99.00th=[12387], 99.50th=[12649], 99.90th=[16450], 99.95th=[17433], 00:21:15.205 | 99.99th=[17695] 00:21:15.205 bw ( KiB/s): min=26064, max=27512, per=99.85%, avg=27008.00, stdev=675.51, samples=4 00:21:15.205 iops : min= 6516, max= 6878, avg=6752.00, stdev=168.88, samples=4 00:21:15.205 write: IOPS=6760, BW=26.4MiB/s (27.7MB/s)(53.0MiB/2008msec); 0 zone resets 00:21:15.205 slat (usec): min=2, max=292, avg= 2.68, stdev= 2.72 00:21:15.205 clat (usec): min=2381, max=17191, avg=8777.40, stdev=810.80 00:21:15.205 lat (usec): min=2393, max=17193, avg=8780.08, stdev=810.64 00:21:15.205 clat percentiles 
(usec): 00:21:15.205 | 1.00th=[ 6915], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8160], 00:21:15.205 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:21:15.205 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:21:15.205 | 99.00th=[10552], 99.50th=[10683], 99.90th=[13960], 99.95th=[16319], 00:21:15.205 | 99.99th=[16581] 00:21:15.205 bw ( KiB/s): min=26816, max=27264, per=100.00%, avg=27046.00, stdev=203.16, samples=4 00:21:15.205 iops : min= 6704, max= 6816, avg=6761.50, stdev=50.79, samples=4 00:21:15.205 lat (msec) : 4=0.04%, 10=71.84%, 20=28.11% 00:21:15.205 cpu : usr=70.50%, sys=22.82%, ctx=24, majf=0, minf=5 00:21:15.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:15.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.205 issued rwts: total=13578,13576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.205 00:21:15.205 Run status group 0 (all jobs): 00:21:15.205 READ: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.0MiB (55.6MB), run=2008-2008msec 00:21:15.205 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.0MiB (55.6MB), run=2008-2008msec 00:21:15.205 18:37:22 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:15.462 18:37:22 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:15.720 18:37:22 -- host/fio.sh@64 -- # ls_nested_guid=b5e095b0-ac94-4181-be63-9e21f6992b98 00:21:15.720 18:37:22 -- host/fio.sh@65 -- # get_lvs_free_mb b5e095b0-ac94-4181-be63-9e21f6992b98 00:21:15.720 18:37:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b5e095b0-ac94-4181-be63-9e21f6992b98 00:21:15.720 18:37:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:15.720 18:37:22 -- common/autotest_common.sh@1345 -- # local fc 00:21:15.720 18:37:22 -- common/autotest_common.sh@1346 -- # local cs 00:21:15.720 18:37:22 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:15.977 18:37:23 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:15.977 { 00:21:15.977 "base_bdev": "Nvme0n1", 00:21:15.977 "block_size": 4096, 00:21:15.977 "cluster_size": 1073741824, 00:21:15.977 "free_clusters": 0, 00:21:15.977 "name": "lvs_0", 00:21:15.977 "total_data_clusters": 4, 00:21:15.977 "uuid": "47f83d06-43aa-4e8c-8989-0e299d183d62" 00:21:15.977 }, 00:21:15.977 { 00:21:15.977 "base_bdev": "7ceee80f-2f64-4ca6-8f03-50989f1edc94", 00:21:15.977 "block_size": 4096, 00:21:15.977 "cluster_size": 4194304, 00:21:15.977 "free_clusters": 1022, 00:21:15.977 "name": "lvs_n_0", 00:21:15.977 "total_data_clusters": 1022, 00:21:15.977 "uuid": "b5e095b0-ac94-4181-be63-9e21f6992b98" 00:21:15.977 } 00:21:15.977 ]' 00:21:15.977 18:37:23 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b5e095b0-ac94-4181-be63-9e21f6992b98") .free_clusters' 00:21:15.977 18:37:23 -- common/autotest_common.sh@1348 -- # fc=1022 00:21:15.977 18:37:23 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b5e095b0-ac94-4181-be63-9e21f6992b98") .cluster_size' 00:21:15.977 18:37:23 -- common/autotest_common.sh@1349 -- # cs=4194304 00:21:15.977 18:37:23 -- common/autotest_common.sh@1352 -- # free_mb=4088 
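get_lvs_free_mb, used here for lvs_n_0 just as it was for lvs_0 earlier, derives the usable size from bdev_lvol_get_lvstores output: free_clusters times cluster_size, expressed in MiB. Here 1022 clusters of 4 MiB give the 4088 echoed next and passed to bdev_lvol_create. A standalone sketch of the same calculation (uuid hard-coded to this run's lvs_n_0):

    # Free space of an lvol store in MiB, from the RPC's JSON output
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=b5e095b0-ac94-4181-be63-9e21f6992b98
    lvs_info=$("$RPC" bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs_info")
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs_info")
    echo $(( fc * cs / 1024 / 1024 ))   # 1022 * 4194304 / 2^20 = 4088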
00:21:15.977 4088 00:21:15.977 18:37:23 -- common/autotest_common.sh@1353 -- # echo 4088 00:21:15.977 18:37:23 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:16.234 bbae4ad4-88d6-4543-992d-ec5b7a118374 00:21:16.234 18:37:23 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:16.492 18:37:23 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:16.750 18:37:23 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:17.009 18:37:24 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.009 18:37:24 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.009 18:37:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:17.009 18:37:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.009 18:37:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:17.009 18:37:24 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.009 18:37:24 -- common/autotest_common.sh@1320 -- # shift 00:21:17.009 18:37:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:17.009 18:37:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:17.009 18:37:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:17.009 18:37:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:17.009 18:37:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:17.009 18:37:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:17.009 18:37:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:17.009 18:37:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.009 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:17.009 fio-3.35 00:21:17.009 Starting 1 thread 00:21:19.591 00:21:19.591 test: (groupid=0, jobs=1): err= 0: pid=94672: Sun Jul 14 18:37:26 2024 00:21:19.591 read: IOPS=5924, BW=23.1MiB/s (24.3MB/s)(46.5MiB/2009msec) 00:21:19.591 slat (nsec): min=1865, max=347196, avg=2693.53, stdev=4441.12 00:21:19.591 clat 
(usec): min=4863, max=19315, avg=11493.73, stdev=1093.12 00:21:19.591 lat (usec): min=4873, max=19318, avg=11496.42, stdev=1092.91 00:21:19.591 clat percentiles (usec): 00:21:19.591 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:21:19.591 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:21:19.591 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12911], 95.00th=[13304], 00:21:19.591 | 99.00th=[14091], 99.50th=[14484], 99.90th=[17695], 99.95th=[18482], 00:21:19.591 | 99.99th=[19268] 00:21:19.591 bw ( KiB/s): min=22520, max=24160, per=99.87%, avg=23668.00, stdev=771.73, samples=4 00:21:19.591 iops : min= 5630, max= 6040, avg=5917.00, stdev=192.93, samples=4 00:21:19.591 write: IOPS=5918, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec); 0 zone resets 00:21:19.591 slat (nsec): min=1980, max=261921, avg=2778.61, stdev=3230.65 00:21:19.591 clat (usec): min=2513, max=18786, avg=10034.94, stdev=951.97 00:21:19.591 lat (usec): min=2526, max=18788, avg=10037.71, stdev=951.87 00:21:19.591 clat percentiles (usec): 00:21:19.592 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:19.592 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:21:19.592 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:21:19.592 | 99.00th=[12125], 99.50th=[12387], 99.90th=[17957], 99.95th=[18482], 00:21:19.592 | 99.99th=[18744] 00:21:19.592 bw ( KiB/s): min=23384, max=23808, per=99.96%, avg=23666.00, stdev=191.15, samples=4 00:21:19.592 iops : min= 5846, max= 5952, avg=5916.50, stdev=47.79, samples=4 00:21:19.592 lat (msec) : 4=0.03%, 10=28.00%, 20=71.97% 00:21:19.592 cpu : usr=70.77%, sys=22.56%, ctx=3, majf=0, minf=5 00:21:19.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:19.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:19.592 issued rwts: total=11903,11891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:19.592 00:21:19.592 Run status group 0 (all jobs): 00:21:19.592 READ: bw=23.1MiB/s (24.3MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.8MB), run=2009-2009msec 00:21:19.592 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.7MB), run=2009-2009msec 00:21:19.592 18:37:26 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:19.592 18:37:26 -- host/fio.sh@74 -- # sync 00:21:19.592 18:37:26 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:19.850 18:37:27 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:20.108 18:37:27 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:20.367 18:37:27 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:20.625 18:37:27 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:21.560 18:37:28 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:21.560 18:37:28 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:21.560 18:37:28 -- host/fio.sh@86 -- # nvmftestfini 00:21:21.560 18:37:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:21.560 18:37:28 -- nvmf/common.sh@116 -- # sync 
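Both fio passes in this section end up running the same command shape: the fio_nvme wrapper ldd's the SPDK plugin to find any sanitizer runtime worth preloading (none in this run, so asan_lib stays empty), then launches stock fio with the plugin as an external ioengine. Condensed from the trace above into a stand-alone sketch:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty here; libclang_rt.asan is probed the same way
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096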
00:21:21.560 18:37:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:21.560 18:37:28 -- nvmf/common.sh@119 -- # set +e 00:21:21.560 18:37:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:21.560 18:37:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:21.560 rmmod nvme_tcp 00:21:21.560 rmmod nvme_fabrics 00:21:21.560 rmmod nvme_keyring 00:21:21.560 18:37:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:21.560 18:37:28 -- nvmf/common.sh@123 -- # set -e 00:21:21.560 18:37:28 -- nvmf/common.sh@124 -- # return 0 00:21:21.560 18:37:28 -- nvmf/common.sh@477 -- # '[' -n 94226 ']' 00:21:21.560 18:37:28 -- nvmf/common.sh@478 -- # killprocess 94226 00:21:21.560 18:37:28 -- common/autotest_common.sh@926 -- # '[' -z 94226 ']' 00:21:21.560 18:37:28 -- common/autotest_common.sh@930 -- # kill -0 94226 00:21:21.560 18:37:28 -- common/autotest_common.sh@931 -- # uname 00:21:21.560 18:37:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:21.560 18:37:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94226 00:21:21.560 killing process with pid 94226 00:21:21.560 18:37:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:21.560 18:37:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:21.560 18:37:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94226' 00:21:21.560 18:37:28 -- common/autotest_common.sh@945 -- # kill 94226 00:21:21.560 18:37:28 -- common/autotest_common.sh@950 -- # wait 94226 00:21:21.817 18:37:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:21.817 18:37:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:21.817 18:37:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:21.817 18:37:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.817 18:37:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:21.817 18:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.817 18:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.817 18:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.817 18:37:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:21.817 ************************************ 00:21:21.817 END TEST nvmf_fio_host 00:21:21.817 ************************************ 00:21:21.817 00:21:21.817 real 0m19.302s 00:21:21.817 user 1m24.944s 00:21:21.817 sys 0m4.301s 00:21:21.817 18:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.817 18:37:29 -- common/autotest_common.sh@10 -- # set +x 00:21:21.817 18:37:29 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:21.817 18:37:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:21.817 18:37:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:21.817 18:37:29 -- common/autotest_common.sh@10 -- # set +x 00:21:21.817 ************************************ 00:21:21.817 START TEST nvmf_failover 00:21:21.817 ************************************ 00:21:21.817 18:37:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:21.817 * Looking for test storage... 
00:21:21.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:21.817 18:37:29 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.817 18:37:29 -- nvmf/common.sh@7 -- # uname -s 00:21:22.075 18:37:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.075 18:37:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.075 18:37:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.075 18:37:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.075 18:37:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.075 18:37:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.075 18:37:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.075 18:37:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.075 18:37:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.075 18:37:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.075 18:37:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:21:22.075 18:37:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:21:22.075 18:37:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.075 18:37:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.075 18:37:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.075 18:37:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.075 18:37:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.075 18:37:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.075 18:37:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.075 18:37:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.075 18:37:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.075 18:37:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.075 18:37:29 -- paths/export.sh@5 
-- # export PATH 00:21:22.075 18:37:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.075 18:37:29 -- nvmf/common.sh@46 -- # : 0 00:21:22.075 18:37:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:22.075 18:37:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:22.075 18:37:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:22.076 18:37:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.076 18:37:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.076 18:37:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:22.076 18:37:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:22.076 18:37:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:22.076 18:37:29 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.076 18:37:29 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.076 18:37:29 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.076 18:37:29 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.076 18:37:29 -- host/failover.sh@18 -- # nvmftestinit 00:21:22.076 18:37:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:22.076 18:37:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.076 18:37:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:22.076 18:37:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:22.076 18:37:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:22.076 18:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.076 18:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.076 18:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.076 18:37:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:22.076 18:37:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:22.076 18:37:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:22.076 18:37:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:22.076 18:37:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:22.076 18:37:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:22.076 18:37:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.076 18:37:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.076 18:37:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:22.076 18:37:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:22.076 18:37:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.076 18:37:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.076 18:37:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.076 18:37:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.076 18:37:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.076 18:37:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.076 18:37:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
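A few lines up (nvmf/common.sh@17-19 in the trace) the initiator identity was generated on the fly: the host NQN comes straight from nvme-cli and the host ID is the UUID embedded in it. A stand-alone sketch of that relationship; the parameter-expansion step is an assumption for illustration, the trace only records the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation; yields the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")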
00:21:22.076 18:37:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.076 18:37:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:22.076 18:37:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:22.076 Cannot find device "nvmf_tgt_br" 00:21:22.076 18:37:29 -- nvmf/common.sh@154 -- # true 00:21:22.076 18:37:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.076 Cannot find device "nvmf_tgt_br2" 00:21:22.076 18:37:29 -- nvmf/common.sh@155 -- # true 00:21:22.076 18:37:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:22.076 18:37:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:22.076 Cannot find device "nvmf_tgt_br" 00:21:22.076 18:37:29 -- nvmf/common.sh@157 -- # true 00:21:22.076 18:37:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:22.076 Cannot find device "nvmf_tgt_br2" 00:21:22.076 18:37:29 -- nvmf/common.sh@158 -- # true 00:21:22.076 18:37:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:22.076 18:37:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:22.076 18:37:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.076 18:37:29 -- nvmf/common.sh@161 -- # true 00:21:22.076 18:37:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.076 18:37:29 -- nvmf/common.sh@162 -- # true 00:21:22.076 18:37:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.076 18:37:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.076 18:37:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.076 18:37:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.076 18:37:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.076 18:37:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.076 18:37:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.076 18:37:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:22.076 18:37:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:22.076 18:37:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:22.076 18:37:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:22.076 18:37:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:22.076 18:37:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:22.076 18:37:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.076 18:37:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:22.076 18:37:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:22.333 18:37:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:22.333 18:37:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:22.333 18:37:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:22.333 18:37:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:22.333 18:37:29 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:21:22.333 18:37:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:22.333 18:37:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:22.333 18:37:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:22.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:22.333 00:21:22.333 --- 10.0.0.2 ping statistics --- 00:21:22.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.333 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:22.333 18:37:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:22.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:22.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:21:22.333 00:21:22.333 --- 10.0.0.3 ping statistics --- 00:21:22.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.333 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:22.333 18:37:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:22.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:21:22.333 00:21:22.333 --- 10.0.0.1 ping statistics --- 00:21:22.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.333 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:21:22.333 18:37:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.333 18:37:29 -- nvmf/common.sh@421 -- # return 0 00:21:22.333 18:37:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:22.333 18:37:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.333 18:37:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:22.333 18:37:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:22.333 18:37:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.333 18:37:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:22.333 18:37:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:22.333 18:37:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:22.333 18:37:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:22.333 18:37:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:22.333 18:37:29 -- common/autotest_common.sh@10 -- # set +x 00:21:22.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.333 18:37:29 -- nvmf/common.sh@469 -- # nvmfpid=94948 00:21:22.333 18:37:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:22.333 18:37:29 -- nvmf/common.sh@470 -- # waitforlisten 94948 00:21:22.333 18:37:29 -- common/autotest_common.sh@819 -- # '[' -z 94948 ']' 00:21:22.333 18:37:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.333 18:37:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.333 18:37:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.333 18:37:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.333 18:37:29 -- common/autotest_common.sh@10 -- # set +x 00:21:22.333 [2024-07-14 18:37:29.640802] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
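The nvmf_veth_init sequence above, after the no-op cleanup whose "Cannot find device" messages appear first, builds the virtual topology every 10.0.0.x address in this test rides on: a target network namespace, veth pairs bridged back to the host, and an iptables accept rule for the NVMe/TCP port. Condensed from the trace (first target interface only; nvmf_tgt_if2/10.0.0.3 follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host initiator reaches the target namespace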
00:21:22.333 [2024-07-14 18:37:29.641069] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.590 [2024-07-14 18:37:29.777013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:22.590 [2024-07-14 18:37:29.847119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:22.590 [2024-07-14 18:37:29.847446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.590 [2024-07-14 18:37:29.847627] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.590 [2024-07-14 18:37:29.847756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.590 [2024-07-14 18:37:29.848094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.590 [2024-07-14 18:37:29.848173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.590 [2024-07-14 18:37:29.848178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.522 18:37:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.522 18:37:30 -- common/autotest_common.sh@852 -- # return 0 00:21:23.522 18:37:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:23.522 18:37:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:23.522 18:37:30 -- common/autotest_common.sh@10 -- # set +x 00:21:23.522 18:37:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.522 18:37:30 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:23.522 [2024-07-14 18:37:30.902113] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.522 18:37:30 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:23.780 Malloc0 00:21:24.039 18:37:31 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.039 18:37:31 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.297 18:37:31 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.555 [2024-07-14 18:37:31.859702] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.555 18:37:31 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:24.813 [2024-07-14 18:37:32.059813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:24.813 18:37:32 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:25.083 [2024-07-14 18:37:32.320124] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:25.083 18:37:32 -- host/failover.sh@31 -- # bdevperf_pid=95066 00:21:25.083 18:37:32 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z 
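Why the target prints three reactors above: nvmf_tgt was started with -m 0xE, a CPU mask of 0b1110 that selects cores 1-3 and leaves core 0 free for the bdevperf initiator launched further down with -c 0x1. A one-liner to sanity-check the mask (illustrative, not from the script):

    printf '0x%X\n' "$((2#1110))"    # -> 0xE, i.e. cores 1, 2 and 3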
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:25.083 18:37:32 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:25.083 18:37:32 -- host/failover.sh@34 -- # waitforlisten 95066 /var/tmp/bdevperf.sock 00:21:25.083 18:37:32 -- common/autotest_common.sh@819 -- # '[' -z 95066 ']' 00:21:25.084 18:37:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.084 18:37:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:25.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.084 18:37:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.084 18:37:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:25.084 18:37:32 -- common/autotest_common.sh@10 -- # set +x 00:21:26.034 18:37:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:26.034 18:37:33 -- common/autotest_common.sh@852 -- # return 0 00:21:26.034 18:37:33 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.292 NVMe0n1 00:21:26.292 18:37:33 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.550 00:21:26.550 18:37:33 -- host/failover.sh@39 -- # run_test_pid=95115 00:21:26.550 18:37:33 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:26.550 18:37:33 -- host/failover.sh@41 -- # sleep 1 00:21:27.923 18:37:34 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.923 [2024-07-14 18:37:35.214014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the 
state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.923 [2024-07-14 18:37:35.214177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214469] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 [2024-07-14 18:37:35.214678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81800 is same with the state(5) to be set 00:21:27.924 18:37:35 -- host/failover.sh@45 -- # sleep 3 00:21:31.205 18:37:38 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.205 00:21:31.205 18:37:38 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.463 [2024-07-14 18:37:38.778604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with 
the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 [2024-07-14 18:37:38.778856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82ef0 is same with the state(5) to be set 00:21:31.463 18:37:38 -- host/failover.sh@50 -- # sleep 3 00:21:34.746 18:37:41 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.746 [2024-07-14 18:37:42.050094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.746 18:37:42 -- host/failover.sh@55 -- # sleep 1 00:21:35.791 18:37:43 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:36.049 [2024-07-14 18:37:43.316742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.049 [2024-07-14 18:37:43.316915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) 
to be set 00:21:36.050 [2024-07-14 18:37:43.316923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.050 [2024-07-14 18:37:43.316931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd835d0 is same with the state(5) to be set 00:21:36.050 18:37:43 -- host/failover.sh@59 -- # wait 95115 00:21:42.618 0 00:21:42.618 18:37:49 -- host/failover.sh@61 -- # killprocess 95066 00:21:42.618 18:37:49 -- common/autotest_common.sh@926 -- # '[' -z 95066 ']' 00:21:42.618 18:37:49 -- common/autotest_common.sh@930 -- # kill -0 95066 00:21:42.618 18:37:49 -- common/autotest_common.sh@931 -- # uname 00:21:42.618 18:37:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:42.618 18:37:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95066 00:21:42.618 killing process with pid 95066 00:21:42.618 18:37:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:42.618 18:37:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:42.618 18:37:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95066' 00:21:42.618 18:37:49 -- common/autotest_common.sh@945 -- # kill 95066 00:21:42.618 18:37:49 -- common/autotest_common.sh@950 -- # wait 95066 00:21:42.618 18:37:49 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:42.618 [2024-07-14 18:37:32.379467] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:42.618 [2024-07-14 18:37:32.379697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95066 ] 00:21:42.618 [2024-07-14 18:37:32.518611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.618 [2024-07-14 18:37:32.585622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.618 Running I/O for 15 seconds... 
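The try.txt dump being replayed here was produced by the failover choreography traced above. Stripped of the xtrace noise and sleeps, the sequence is (ports, socket and NQN as in the trace; a sketch of the flow, not the script verbatim):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &   # 15 s of verify I/O (-t 15 above)
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop first path, fail over to 4421
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # end on 4420, then wait for the run to finish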
00:21:42.618 [2024-07-14 18:37:35.214978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 
18:37:35.215266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.618 [2024-07-14 18:37:35.215359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.618 [2024-07-14 18:37:35.215370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.215971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.215984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216195] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.619 [2024-07-14 18:37:35.216269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.619 [2024-07-14 18:37:35.216281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.620 [2024-07-14 18:37:35.216930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.216984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.216995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.620 [2024-07-14 18:37:35.217123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.620 [2024-07-14 18:37:35.217231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.620 [2024-07-14 18:37:35.217243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 
18:37:35.217400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.217812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.217984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.217998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.621 [2024-07-14 18:37:35.218010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.218043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.218068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.218099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.218125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.218151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.621 [2024-07-14 18:37:35.218177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.621 [2024-07-14 18:37:35.218192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.622 [2024-07-14 18:37:35.218593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.622 [2024-07-14 18:37:35.218884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.218898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d9f70 is same with the state(5) to be set 00:21:42.622 [2024-07-14 18:37:35.218914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.622 [2024-07-14 18:37:35.218923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.622 [2024-07-14 18:37:35.218933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122408 len:8 PRP1 0x0 PRP2 0x0 00:21:42.622 [2024-07-14 18:37:35.218961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.219026] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15d9f70 was disconnected and freed. reset controller. 
00:21:42.622 [2024-07-14 18:37:35.219042] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:42.622 [2024-07-14 18:37:35.219093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.622 [2024-07-14 18:37:35.219113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.219126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.622 [2024-07-14 18:37:35.219138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.219156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.622 [2024-07-14 18:37:35.219169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.219198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.622 [2024-07-14 18:37:35.219211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.622 [2024-07-14 18:37:35.219223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.622 [2024-07-14 18:37:35.221750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.622 [2024-07-14 18:37:35.221784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15baf20 (9): Bad file descriptor 00:21:42.622 [2024-07-14 18:37:35.249378] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:42.622 [2024-07-14 18:37:38.778965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 
18:37:38.779339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.623 [2024-07-14 18:37:38.779832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.623 [2024-07-14 18:37:38.779912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.623 [2024-07-14 18:37:38.779949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.779963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.779975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.780000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.623 [2024-07-14 18:37:38.780012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.623 [2024-07-14 18:37:38.780025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.624 [2024-07-14 18:37:38.780663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.624 [2024-07-14 18:37:38.780713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.624 [2024-07-14 18:37:38.780727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.780782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.625 [2024-07-14 18:37:38.780971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.780984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.780996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 
18:37:38.781268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.781530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.781558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.625 [2024-07-14 18:37:38.781626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.625 [2024-07-14 18:37:38.781654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.625 [2024-07-14 18:37:38.781669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.781710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.781738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.781793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781933] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.781983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.781997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.626 [2024-07-14 18:37:38.782530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.626 [2024-07-14 18:37:38.782641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.626 [2024-07-14 18:37:38.782662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:42.627 [2024-07-14 18:37:38.782868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.782977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.782991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.783004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:38.783031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780070 is same with the state(5) to be set 00:21:42.627 [2024-07-14 18:37:38.783071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.627 [2024-07-14 18:37:38.783080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.627 [2024-07-14 18:37:38.783091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113360 len:8 PRP1 0x0 PRP2 0x0 00:21:42.627 [2024-07-14 18:37:38.783103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783175] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1780070 was disconnected and freed. reset controller. 
00:21:42.627 [2024-07-14 18:37:38.783192] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:42.627 [2024-07-14 18:37:38.783253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.627 [2024-07-14 18:37:38.783274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.627 [2024-07-14 18:37:38.783302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.627 [2024-07-14 18:37:38.783328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.627 [2024-07-14 18:37:38.783354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:38.783367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.627 [2024-07-14 18:37:38.783415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15baf20 (9): Bad file descriptor 00:21:42.627 [2024-07-14 18:37:38.786133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.627 [2024-07-14 18:37:38.813875] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
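The burst of "ABORTED - SQ DELETION" completions above ends with the queue pair being freed and a failover from 10.0.0.2:4421 to 10.0.0.2:4422 followed by a successful controller reset, so the aborted READ/WRITE notices appear to be the queued I/O that was torn down as part of that reconnect. As a reading aid only (not part of the test suite), here is a minimal Python sketch that summarizes such a log; the regular expressions are inferred from the log lines visible in this transcript and may need adjusting for other SPDK log formats.

#!/usr/bin/env python3
# Hedged helper sketch: count printed I/O commands, ABORTED completions, and
# failover transitions in an SPDK nvmf autotest log fed on stdin.
import re
import sys
from collections import Counter

# Patterns inferred from the log lines in this transcript (assumption, not an SPDK API).
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(stream):
    opcode_counts = Counter()   # READ/WRITE commands printed by nvme_io_qpair_print_command
    abort_count = 0             # ABORTED - SQ DELETION completions printed
    failovers = []              # (source, destination) trid pairs reported by bdev_nvme
    for line in stream:
        for match in CMD_RE.findall(line):
            opcode_counts[match[0]] += 1
        abort_count += len(ABORT_RE.findall(line))
        failovers.extend(FAILOVER_RE.findall(line))
    return opcode_counts, abort_count, failovers

if __name__ == "__main__":
    counts, aborts, failovers = summarize(sys.stdin)
    print("printed I/O commands by opcode:", dict(counts))
    print("ABORTED - SQ DELETION completions:", aborts)
    for src, dst in failovers:
        print("failover:", src, "->", dst)

Run as, for example, "python3 summarize_aborts.py < build.log"; on this section it would report the READ/WRITE abort counts per burst and the 4421 -> 4422 failover noted above.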
00:21:42.627 [2024-07-14 18:37:43.317010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.627 [2024-07-14 18:37:43.317320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.627 [2024-07-14 18:37:43.317335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317668] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71888 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.628 [2024-07-14 18:37:43.317968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.317983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.317996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.628 [2024-07-14 18:37:43.318209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.628 [2024-07-14 18:37:43.318222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.629 [2024-07-14 18:37:43.318250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318554] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.318926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.318983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.318997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.319011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.319032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.629 [2024-07-14 18:37:43.319045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.629 [2024-07-14 18:37:43.319060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.629 [2024-07-14 18:37:43.319072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:42.631 [2024-07-14 18:37:43.319430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.631 [2024-07-14 18:37:43.319735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.631 [2024-07-14 18:37:43.319750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.631 [2024-07-14 18:37:43.319763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.319986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.319999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.632 [2024-07-14 18:37:43.320485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.632 [2024-07-14 18:37:43.320725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.632 [2024-07-14 18:37:43.320743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.320758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.633 [2024-07-14 18:37:43.320771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.320786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.633 [2024-07-14 18:37:43.320798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.320813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c7390 is same with the state(5) to be set 00:21:42.633 [2024-07-14 18:37:43.320829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.633 [2024-07-14 18:37:43.320839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.633 [2024-07-14 18:37:43.320849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71776 len:8 PRP1 0x0 PRP2 0x0 00:21:42.633 [2024-07-14 18:37:43.320862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.320918] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15c7390 was disconnected and freed. reset controller. 
00:21:42.633 [2024-07-14 18:37:43.320936] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:42.633 [2024-07-14 18:37:43.320997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.633 [2024-07-14 18:37:43.321019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.321033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.633 [2024-07-14 18:37:43.321046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.321060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.633 [2024-07-14 18:37:43.321072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.321086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.633 [2024-07-14 18:37:43.321098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.633 [2024-07-14 18:37:43.321111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.633 [2024-07-14 18:37:43.323511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.633 [2024-07-14 18:37:43.323549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15baf20 (9): Bad file descriptor 00:21:42.633 [2024-07-14 18:37:43.349911] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:42.633 00:21:42.633 Latency(us) 00:21:42.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.633 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:42.633 Verification LBA range: start 0x0 length 0x4000 00:21:42.633 NVMe0n1 : 15.01 13221.99 51.65 267.94 0.00 9470.55 640.47 16086.11 00:21:42.633 =================================================================================================================== 00:21:42.633 Total : 13221.99 51.65 267.94 0.00 9470.55 640.47 16086.11 00:21:42.633 Received shutdown signal, test time was about 15.000000 seconds 00:21:42.633 00:21:42.633 Latency(us) 00:21:42.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.633 =================================================================================================================== 00:21:42.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.633 18:37:49 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:42.633 18:37:49 -- host/failover.sh@65 -- # count=3 00:21:42.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:42.633 18:37:49 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:42.633 18:37:49 -- host/failover.sh@73 -- # bdevperf_pid=95318 00:21:42.633 18:37:49 -- host/failover.sh@75 -- # waitforlisten 95318 /var/tmp/bdevperf.sock 00:21:42.633 18:37:49 -- common/autotest_common.sh@819 -- # '[' -z 95318 ']' 00:21:42.633 18:37:49 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:42.633 18:37:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.633 18:37:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:42.633 18:37:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.633 18:37:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:42.633 18:37:49 -- common/autotest_common.sh@10 -- # set +x 00:21:43.200 18:37:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.200 18:37:50 -- common/autotest_common.sh@852 -- # return 0 00:21:43.200 18:37:50 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.200 [2024-07-14 18:37:50.591258] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.200 18:37:50 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:43.458 [2024-07-14 18:37:50.811427] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:43.458 18:37:50 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.717 NVMe0n1 00:21:43.717 18:37:51 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.285 00:21:44.285 18:37:51 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.285 00:21:44.543 18:37:51 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.543 18:37:51 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:44.544 18:37:51 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.802 18:37:52 -- host/failover.sh@87 -- # sleep 3 00:21:48.159 18:37:55 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:48.159 18:37:55 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:48.159 18:37:55 -- host/failover.sh@90 -- # run_test_pid=95456 00:21:48.159 18:37:55 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.159 18:37:55 -- host/failover.sh@92 -- # wait 95456 00:21:49.534 0 00:21:49.534 18:37:56 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:49.534 
[2024-07-14 18:37:49.403320] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:49.534 [2024-07-14 18:37:49.403432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95318 ] 00:21:49.534 [2024-07-14 18:37:49.544137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.534 [2024-07-14 18:37:49.631286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.534 [2024-07-14 18:37:52.161518] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:49.534 [2024-07-14 18:37:52.161630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.534 [2024-07-14 18:37:52.161655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.534 [2024-07-14 18:37:52.161673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.534 [2024-07-14 18:37:52.161686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.534 [2024-07-14 18:37:52.161700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.534 [2024-07-14 18:37:52.161713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.534 [2024-07-14 18:37:52.161727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.534 [2024-07-14 18:37:52.161740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.534 [2024-07-14 18:37:52.161753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.534 [2024-07-14 18:37:52.161800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.534 [2024-07-14 18:37:52.161830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05f20 (9): Bad file descriptor 00:21:49.534 [2024-07-14 18:37:52.169317] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:49.534 Running I/O for 1 seconds... 
00:21:49.534 00:21:49.534 Latency(us) 00:21:49.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.534 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.534 Verification LBA range: start 0x0 length 0x4000 00:21:49.534 NVMe0n1 : 1.01 12930.17 50.51 0.00 0.00 9850.77 1496.90 16205.27 00:21:49.534 =================================================================================================================== 00:21:49.534 Total : 12930.17 50.51 0.00 0.00 9850.77 1496.90 16205.27 00:21:49.534 18:37:56 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:49.534 18:37:56 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:49.534 18:37:56 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:49.793 18:37:57 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:49.793 18:37:57 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.051 18:37:57 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.309 18:37:57 -- host/failover.sh@101 -- # sleep 3 00:21:53.591 18:38:00 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.591 18:38:00 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:53.591 18:38:00 -- host/failover.sh@108 -- # killprocess 95318 00:21:53.591 18:38:00 -- common/autotest_common.sh@926 -- # '[' -z 95318 ']' 00:21:53.591 18:38:00 -- common/autotest_common.sh@930 -- # kill -0 95318 00:21:53.591 18:38:00 -- common/autotest_common.sh@931 -- # uname 00:21:53.591 18:38:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:53.591 18:38:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95318 00:21:53.591 killing process with pid 95318 00:21:53.591 18:38:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:53.591 18:38:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:53.591 18:38:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95318' 00:21:53.591 18:38:00 -- common/autotest_common.sh@945 -- # kill 95318 00:21:53.591 18:38:00 -- common/autotest_common.sh@950 -- # wait 95318 00:21:53.849 18:38:01 -- host/failover.sh@110 -- # sync 00:21:53.849 18:38:01 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.107 18:38:01 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:54.107 18:38:01 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:54.107 18:38:01 -- host/failover.sh@116 -- # nvmftestfini 00:21:54.107 18:38:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.107 18:38:01 -- nvmf/common.sh@116 -- # sync 00:21:54.107 18:38:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.107 18:38:01 -- nvmf/common.sh@119 -- # set +e 00:21:54.107 18:38:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.107 18:38:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.107 rmmod nvme_tcp 00:21:54.107 rmmod nvme_fabrics 00:21:54.107 rmmod nvme_keyring 00:21:54.107 18:38:01 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-fabrics 00:21:54.107 18:38:01 -- nvmf/common.sh@123 -- # set -e 00:21:54.107 18:38:01 -- nvmf/common.sh@124 -- # return 0 00:21:54.107 18:38:01 -- nvmf/common.sh@477 -- # '[' -n 94948 ']' 00:21:54.107 18:38:01 -- nvmf/common.sh@478 -- # killprocess 94948 00:21:54.107 18:38:01 -- common/autotest_common.sh@926 -- # '[' -z 94948 ']' 00:21:54.107 18:38:01 -- common/autotest_common.sh@930 -- # kill -0 94948 00:21:54.107 18:38:01 -- common/autotest_common.sh@931 -- # uname 00:21:54.107 18:38:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.107 18:38:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94948 00:21:54.107 killing process with pid 94948 00:21:54.107 18:38:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:54.107 18:38:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:54.107 18:38:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94948' 00:21:54.107 18:38:01 -- common/autotest_common.sh@945 -- # kill 94948 00:21:54.107 18:38:01 -- common/autotest_common.sh@950 -- # wait 94948 00:21:54.366 18:38:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:54.366 18:38:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:54.366 18:38:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:54.366 18:38:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.366 18:38:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:54.366 18:38:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.366 18:38:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.366 18:38:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.366 18:38:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:54.366 00:21:54.366 real 0m32.510s 00:21:54.366 user 2m6.256s 00:21:54.366 sys 0m4.825s 00:21:54.366 18:38:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.366 18:38:01 -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 ************************************ 00:21:54.366 END TEST nvmf_failover 00:21:54.366 ************************************ 00:21:54.366 18:38:01 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:54.366 18:38:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:54.366 18:38:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:54.366 18:38:01 -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 ************************************ 00:21:54.366 START TEST nvmf_discovery 00:21:54.366 ************************************ 00:21:54.366 18:38:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:54.624 * Looking for test storage... 
00:21:54.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:54.624 18:38:01 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:54.624 18:38:01 -- nvmf/common.sh@7 -- # uname -s 00:21:54.624 18:38:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.624 18:38:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.624 18:38:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.624 18:38:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.624 18:38:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.624 18:38:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.624 18:38:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.624 18:38:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.624 18:38:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.624 18:38:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.624 18:38:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:21:54.624 18:38:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:21:54.624 18:38:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.624 18:38:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.624 18:38:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:54.624 18:38:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:54.624 18:38:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.624 18:38:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.624 18:38:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.624 18:38:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.624 18:38:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.624 18:38:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.624 18:38:01 -- paths/export.sh@5 
-- # export PATH 00:21:54.624 18:38:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.624 18:38:01 -- nvmf/common.sh@46 -- # : 0 00:21:54.624 18:38:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:54.624 18:38:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:54.624 18:38:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:54.624 18:38:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.624 18:38:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.624 18:38:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:54.624 18:38:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:54.624 18:38:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:54.624 18:38:01 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:54.624 18:38:01 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:54.624 18:38:01 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:54.624 18:38:01 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:54.624 18:38:01 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:54.624 18:38:01 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:54.624 18:38:01 -- host/discovery.sh@25 -- # nvmftestinit 00:21:54.624 18:38:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:54.624 18:38:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.624 18:38:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:54.624 18:38:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:54.624 18:38:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:54.624 18:38:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.624 18:38:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.624 18:38:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.624 18:38:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:54.624 18:38:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:54.624 18:38:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:54.624 18:38:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:54.624 18:38:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:54.624 18:38:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:54.624 18:38:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.624 18:38:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.624 18:38:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:54.624 18:38:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:54.624 18:38:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:54.624 18:38:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:54.624 18:38:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:54.624 18:38:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.624 18:38:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:54.624 
18:38:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:54.624 18:38:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:54.624 18:38:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:54.624 18:38:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:54.624 18:38:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:54.624 Cannot find device "nvmf_tgt_br" 00:21:54.624 18:38:01 -- nvmf/common.sh@154 -- # true 00:21:54.624 18:38:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.624 Cannot find device "nvmf_tgt_br2" 00:21:54.624 18:38:01 -- nvmf/common.sh@155 -- # true 00:21:54.624 18:38:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:54.624 18:38:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:54.624 Cannot find device "nvmf_tgt_br" 00:21:54.624 18:38:01 -- nvmf/common.sh@157 -- # true 00:21:54.624 18:38:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:54.624 Cannot find device "nvmf_tgt_br2" 00:21:54.624 18:38:01 -- nvmf/common.sh@158 -- # true 00:21:54.624 18:38:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:54.624 18:38:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:54.624 18:38:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.624 18:38:01 -- nvmf/common.sh@161 -- # true 00:21:54.624 18:38:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.624 18:38:01 -- nvmf/common.sh@162 -- # true 00:21:54.624 18:38:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:54.624 18:38:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:54.624 18:38:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:54.624 18:38:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:54.624 18:38:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:54.624 18:38:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.624 18:38:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.624 18:38:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:54.882 18:38:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:54.882 18:38:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:54.882 18:38:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:54.882 18:38:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:54.882 18:38:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:54.882 18:38:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.882 18:38:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.882 18:38:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:54.882 18:38:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:54.882 18:38:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:54.882 18:38:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:54.882 18:38:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.882 18:38:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.882 18:38:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.882 18:38:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.882 18:38:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:54.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:54.882 00:21:54.882 --- 10.0.0.2 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:54.882 18:38:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:54.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:54.882 00:21:54.882 --- 10.0.0.3 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:54.882 18:38:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:54.882 00:21:54.882 --- 10.0.0.1 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:54.882 18:38:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.882 18:38:02 -- nvmf/common.sh@421 -- # return 0 00:21:54.882 18:38:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:54.882 18:38:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.882 18:38:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:54.882 18:38:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:54.882 18:38:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.882 18:38:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:54.882 18:38:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:54.882 18:38:02 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:54.882 18:38:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:54.882 18:38:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:54.882 18:38:02 -- common/autotest_common.sh@10 -- # set +x 00:21:54.882 18:38:02 -- nvmf/common.sh@469 -- # nvmfpid=95751 00:21:54.882 18:38:02 -- nvmf/common.sh@470 -- # waitforlisten 95751 00:21:54.882 18:38:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:54.882 18:38:02 -- common/autotest_common.sh@819 -- # '[' -z 95751 ']' 00:21:54.882 18:38:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.882 18:38:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:54.882 18:38:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:54.882 18:38:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:54.882 18:38:02 -- common/autotest_common.sh@10 -- # set +x 00:21:54.882 [2024-07-14 18:38:02.250677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:54.882 [2024-07-14 18:38:02.250769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.146 [2024-07-14 18:38:02.392508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.146 [2024-07-14 18:38:02.468717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:55.146 [2024-07-14 18:38:02.468879] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.146 [2024-07-14 18:38:02.468925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.146 [2024-07-14 18:38:02.468933] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.146 [2024-07-14 18:38:02.468956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.083 18:38:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:56.083 18:38:03 -- common/autotest_common.sh@852 -- # return 0 00:21:56.083 18:38:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:56.083 18:38:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:56.083 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:21:56.083 18:38:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.083 18:38:03 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.083 18:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.083 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:21:56.083 [2024-07-14 18:38:03.296093] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.083 18:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.083 18:38:03 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:56.083 18:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.083 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:21:56.083 [2024-07-14 18:38:03.304185] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:56.083 18:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.083 18:38:03 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:56.083 18:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.083 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:21:56.083 null0 00:21:56.083 18:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.083 18:38:03 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:56.083 18:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.083 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:21:56.083 null1 00:21:56.083 18:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.083 18:38:03 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:56.083 18:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.083 18:38:03 -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.083 18:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.083 18:38:03 -- host/discovery.sh@45 -- # hostpid=95802 00:21:56.083 18:38:03 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:56.083 18:38:03 -- host/discovery.sh@46 -- # waitforlisten 95802 /tmp/host.sock 00:21:56.083 18:38:03 -- common/autotest_common.sh@819 -- # '[' -z 95802 ']' 00:21:56.083 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:56.083 18:38:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:56.083 18:38:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.083 18:38:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:56.083 18:38:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.083 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:21:56.083 [2024-07-14 18:38:03.400792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:56.083 [2024-07-14 18:38:03.400887] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95802 ] 00:21:56.341 [2024-07-14 18:38:03.540060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.341 [2024-07-14 18:38:03.617050] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.341 [2024-07-14 18:38:03.617226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.273 18:38:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.273 18:38:04 -- common/autotest_common.sh@852 -- # return 0 00:21:57.273 18:38:04 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.273 18:38:04 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:57.273 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.273 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.273 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.273 18:38:04 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:57.273 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.273 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.273 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.273 18:38:04 -- host/discovery.sh@72 -- # notify_id=0 00:21:57.273 18:38:04 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # sort 00:21:57.273 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # xargs 00:21:57.273 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.273 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.273 18:38:04 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:57.273 18:38:04 -- host/discovery.sh@79 -- # get_bdev_list 00:21:57.273 
18:38:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.273 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.273 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.273 18:38:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:57.273 18:38:04 -- host/discovery.sh@55 -- # sort 00:21:57.273 18:38:04 -- host/discovery.sh@55 -- # xargs 00:21:57.273 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.273 18:38:04 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:57.273 18:38:04 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:57.273 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.273 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.273 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.273 18:38:04 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.273 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.273 18:38:04 -- host/discovery.sh@59 -- # sort 00:21:57.274 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.274 18:38:04 -- host/discovery.sh@59 -- # xargs 00:21:57.274 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.274 18:38:04 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:57.274 18:38:04 -- host/discovery.sh@83 -- # get_bdev_list 00:21:57.274 18:38:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.274 18:38:04 -- host/discovery.sh@55 -- # xargs 00:21:57.274 18:38:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:57.274 18:38:04 -- host/discovery.sh@55 -- # sort 00:21:57.274 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.274 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.274 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.274 18:38:04 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:57.274 18:38:04 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:57.274 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.274 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.274 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.274 18:38:04 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:57.274 18:38:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.274 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.274 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.274 18:38:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.274 18:38:04 -- host/discovery.sh@59 -- # sort 00:21:57.274 18:38:04 -- host/discovery.sh@59 -- # xargs 00:21:57.274 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.532 18:38:04 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:57.532 18:38:04 -- host/discovery.sh@87 -- # get_bdev_list 00:21:57.532 18:38:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.532 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.532 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.532 18:38:04 -- host/discovery.sh@55 -- # sort 00:21:57.532 18:38:04 -- host/discovery.sh@55 -- # xargs 00:21:57.532 18:38:04 -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:21:57.532 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.532 18:38:04 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:57.532 18:38:04 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:57.532 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.532 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.532 [2024-07-14 18:38:04.796812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.532 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.532 18:38:04 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:57.532 18:38:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.532 18:38:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.532 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.533 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.533 18:38:04 -- host/discovery.sh@59 -- # sort 00:21:57.533 18:38:04 -- host/discovery.sh@59 -- # xargs 00:21:57.533 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.533 18:38:04 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:57.533 18:38:04 -- host/discovery.sh@93 -- # get_bdev_list 00:21:57.533 18:38:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.533 18:38:04 -- host/discovery.sh@55 -- # sort 00:21:57.533 18:38:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:57.533 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.533 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.533 18:38:04 -- host/discovery.sh@55 -- # xargs 00:21:57.533 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.533 18:38:04 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:57.533 18:38:04 -- host/discovery.sh@94 -- # get_notification_count 00:21:57.533 18:38:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:57.533 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.533 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.533 18:38:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:57.533 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.533 18:38:04 -- host/discovery.sh@74 -- # notification_count=0 00:21:57.533 18:38:04 -- host/discovery.sh@75 -- # notify_id=0 00:21:57.790 18:38:04 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:57.790 18:38:04 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:57.790 18:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.790 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.790 18:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.790 18:38:04 -- host/discovery.sh@100 -- # sleep 1 00:21:58.048 [2024-07-14 18:38:05.429334] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:58.048 [2024-07-14 18:38:05.429391] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:58.048 [2024-07-14 18:38:05.429409] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:58.306 [2024-07-14 18:38:05.515449] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:58.306 [2024-07-14 18:38:05.571307] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:58.306 [2024-07-14 18:38:05.571334] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:58.563 18:38:05 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:58.563 18:38:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.563 18:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.563 18:38:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.563 18:38:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.563 18:38:05 -- host/discovery.sh@59 -- # sort 00:21:58.563 18:38:05 -- host/discovery.sh@59 -- # xargs 00:21:58.821 18:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@102 -- # get_bdev_list 00:21:58.821 18:38:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.821 18:38:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.821 18:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.821 18:38:06 -- common/autotest_common.sh@10 -- # set +x 00:21:58.821 18:38:06 -- host/discovery.sh@55 -- # sort 00:21:58.821 18:38:06 -- host/discovery.sh@55 -- # xargs 00:21:58.821 18:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:58.821 18:38:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:58.821 18:38:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:58.821 18:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.821 18:38:06 -- common/autotest_common.sh@10 -- # set +x 00:21:58.821 18:38:06 -- host/discovery.sh@63 -- # sort -n 00:21:58.821 18:38:06 -- host/discovery.sh@63 -- # xargs 00:21:58.821 18:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@104 -- # get_notification_count 00:21:58.821 18:38:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:58.821 18:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.821 18:38:06 -- common/autotest_common.sh@10 -- # set +x 00:21:58.821 18:38:06 -- host/discovery.sh@74 -- # jq '. | length' 00:21:58.821 18:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@74 -- # notification_count=1 00:21:58.821 18:38:06 -- host/discovery.sh@75 -- # notify_id=1 00:21:58.821 18:38:06 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:58.821 18:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.821 18:38:06 -- common/autotest_common.sh@10 -- # set +x 00:21:58.821 18:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.821 18:38:06 -- host/discovery.sh@109 -- # sleep 1 00:22:00.196 18:38:07 -- host/discovery.sh@110 -- # get_bdev_list 00:22:00.196 18:38:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:00.196 18:38:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:00.196 18:38:07 -- host/discovery.sh@55 -- # sort 00:22:00.196 18:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.196 18:38:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.196 18:38:07 -- host/discovery.sh@55 -- # xargs 00:22:00.196 18:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.196 18:38:07 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:00.196 18:38:07 -- host/discovery.sh@111 -- # get_notification_count 00:22:00.196 18:38:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:00.196 18:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.196 18:38:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.196 18:38:07 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:00.196 18:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.196 18:38:07 -- host/discovery.sh@74 -- # notification_count=1 00:22:00.196 18:38:07 -- host/discovery.sh@75 -- # notify_id=2 00:22:00.196 18:38:07 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:00.196 18:38:07 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:00.196 18:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.196 18:38:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.196 [2024-07-14 18:38:07.335232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.196 [2024-07-14 18:38:07.335878] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:00.196 [2024-07-14 18:38:07.335931] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:00.196 18:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.196 18:38:07 -- host/discovery.sh@117 -- # sleep 1 00:22:00.196 [2024-07-14 18:38:07.421981] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:00.196 [2024-07-14 18:38:07.481258] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:00.196 [2024-07-14 18:38:07.481282] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:00.196 [2024-07-14 18:38:07.481304] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:01.132 18:38:08 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:01.132 18:38:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.132 18:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.132 18:38:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.132 18:38:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.132 18:38:08 -- host/discovery.sh@59 -- # sort 00:22:01.132 18:38:08 -- host/discovery.sh@59 -- # xargs 00:22:01.132 18:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.132 18:38:08 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.132 18:38:08 -- host/discovery.sh@119 -- # get_bdev_list 00:22:01.132 18:38:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.132 18:38:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.132 18:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.132 18:38:08 -- host/discovery.sh@55 -- # sort 00:22:01.132 18:38:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.132 18:38:08 -- host/discovery.sh@55 -- # xargs 00:22:01.132 18:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.132 18:38:08 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:01.132 18:38:08 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:01.132 18:38:08 -- host/discovery.sh@63 -- # xargs 00:22:01.132 18:38:08 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:01.132 18:38:08 -- host/discovery.sh@63 -- # sort -n 00:22:01.132 18:38:08 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:01.132 18:38:08 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:22:01.132 18:38:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.132 18:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.132 18:38:08 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:01.132 18:38:08 -- host/discovery.sh@121 -- # get_notification_count 00:22:01.132 18:38:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:01.132 18:38:08 -- host/discovery.sh@74 -- # jq '. | length' 00:22:01.132 18:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.132 18:38:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.132 18:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.391 18:38:08 -- host/discovery.sh@74 -- # notification_count=0 00:22:01.391 18:38:08 -- host/discovery.sh@75 -- # notify_id=2 00:22:01.391 18:38:08 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:01.391 18:38:08 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.391 18:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.391 18:38:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.391 [2024-07-14 18:38:08.576601] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:01.391 [2024-07-14 18:38:08.576655] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:01.391 [2024-07-14 18:38:08.579150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.391 [2024-07-14 18:38:08.579200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.391 [2024-07-14 18:38:08.579229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.391 [2024-07-14 18:38:08.579238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.391 [2024-07-14 18:38:08.579247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.391 [2024-07-14 18:38:08.579255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.391 [2024-07-14 18:38:08.579264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.391 [2024-07-14 18:38:08.579273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.391 [2024-07-14 18:38:08.579281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.391 18:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.391 18:38:08 -- host/discovery.sh@127 -- # sleep 1 00:22:01.391 [2024-07-14 18:38:08.589095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.391 [2024-07-14 18:38:08.599112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.391 [2024-07-14 18:38:08.599254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:22:01.391 [2024-07-14 18:38:08.599299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.391 [2024-07-14 18:38:08.599314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.391 [2024-07-14 18:38:08.599324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.391 [2024-07-14 18:38:08.599339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.391 [2024-07-14 18:38:08.599353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.391 [2024-07-14 18:38:08.599360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.391 [2024-07-14 18:38:08.599385] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.391 [2024-07-14 18:38:08.599416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.391 [2024-07-14 18:38:08.609204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.391 [2024-07-14 18:38:08.609298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.391 [2024-07-14 18:38:08.609343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.391 [2024-07-14 18:38:08.609359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.391 [2024-07-14 18:38:08.609369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.391 [2024-07-14 18:38:08.609385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.391 [2024-07-14 18:38:08.609399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.391 [2024-07-14 18:38:08.609407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.391 [2024-07-14 18:38:08.609416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.391 [2024-07-14 18:38:08.609430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.391 [2024-07-14 18:38:08.619252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.391 [2024-07-14 18:38:08.619346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.619390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.619406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.392 [2024-07-14 18:38:08.619415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.392 [2024-07-14 18:38:08.619430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.392 [2024-07-14 18:38:08.619444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.392 [2024-07-14 18:38:08.619452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.392 [2024-07-14 18:38:08.619460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.392 [2024-07-14 18:38:08.619473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.392 [2024-07-14 18:38:08.629317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.392 [2024-07-14 18:38:08.629430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.629475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.629490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.392 [2024-07-14 18:38:08.629500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.392 [2024-07-14 18:38:08.629558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.392 [2024-07-14 18:38:08.629571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.392 [2024-07-14 18:38:08.629579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.392 [2024-07-14 18:38:08.629587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.392 [2024-07-14 18:38:08.629615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.392 [2024-07-14 18:38:08.639398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.392 [2024-07-14 18:38:08.639527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.639598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.639616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.392 [2024-07-14 18:38:08.639626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.392 [2024-07-14 18:38:08.639642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.392 [2024-07-14 18:38:08.639666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.392 [2024-07-14 18:38:08.639674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.392 [2024-07-14 18:38:08.639683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.392 [2024-07-14 18:38:08.639708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.392 [2024-07-14 18:38:08.649507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.392 [2024-07-14 18:38:08.649626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.649668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.649683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.392 [2024-07-14 18:38:08.649692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.392 [2024-07-14 18:38:08.649705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.392 [2024-07-14 18:38:08.649727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.392 [2024-07-14 18:38:08.649736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.392 [2024-07-14 18:38:08.649744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.392 [2024-07-14 18:38:08.649756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.392 [2024-07-14 18:38:08.659600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.392 [2024-07-14 18:38:08.659691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.659734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.392 [2024-07-14 18:38:08.659750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004150 with addr=10.0.0.2, port=4420 00:22:01.392 [2024-07-14 18:38:08.659759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004150 is same with the state(5) to be set 00:22:01.392 [2024-07-14 18:38:08.659774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004150 (9): Bad file descriptor 00:22:01.392 [2024-07-14 18:38:08.659796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.392 [2024-07-14 18:38:08.659805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.392 [2024-07-14 18:38:08.659814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.392 [2024-07-14 18:38:08.659842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.392 [2024-07-14 18:38:08.663802] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:01.392 [2024-07-14 18:38:08.663831] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:02.326 18:38:09 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:02.326 18:38:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:02.326 18:38:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:02.326 18:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.326 18:38:09 -- common/autotest_common.sh@10 -- # set +x 00:22:02.326 18:38:09 -- host/discovery.sh@59 -- # sort 00:22:02.326 18:38:09 -- host/discovery.sh@59 -- # xargs 00:22:02.326 18:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.326 18:38:09 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.326 18:38:09 -- host/discovery.sh@129 -- # get_bdev_list 00:22:02.326 18:38:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.326 18:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.326 18:38:09 -- common/autotest_common.sh@10 -- # set +x 00:22:02.326 18:38:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.326 18:38:09 -- host/discovery.sh@55 -- # sort 00:22:02.326 18:38:09 -- host/discovery.sh@55 -- # xargs 00:22:02.326 18:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.326 18:38:09 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:02.326 18:38:09 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:02.326 18:38:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:02.326 18:38:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:02.326 18:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.326 18:38:09 -- host/discovery.sh@63 -- # sort -n 00:22:02.326 18:38:09 -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.326 18:38:09 -- host/discovery.sh@63 -- # xargs 00:22:02.326 18:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.586 18:38:09 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:02.586 18:38:09 -- host/discovery.sh@131 -- # get_notification_count 00:22:02.586 18:38:09 -- host/discovery.sh@74 -- # jq '. | length' 00:22:02.586 18:38:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:02.586 18:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.586 18:38:09 -- common/autotest_common.sh@10 -- # set +x 00:22:02.586 18:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.586 18:38:09 -- host/discovery.sh@74 -- # notification_count=0 00:22:02.586 18:38:09 -- host/discovery.sh@75 -- # notify_id=2 00:22:02.586 18:38:09 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:02.586 18:38:09 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:02.586 18:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.586 18:38:09 -- common/autotest_common.sh@10 -- # set +x 00:22:02.586 18:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.586 18:38:09 -- host/discovery.sh@135 -- # sleep 1 00:22:03.522 18:38:10 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:03.522 18:38:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.522 18:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.522 18:38:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.522 18:38:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.522 18:38:10 -- host/discovery.sh@59 -- # xargs 00:22:03.522 18:38:10 -- host/discovery.sh@59 -- # sort 00:22:03.522 18:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.522 18:38:10 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:03.522 18:38:10 -- host/discovery.sh@137 -- # get_bdev_list 00:22:03.522 18:38:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.522 18:38:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.522 18:38:10 -- host/discovery.sh@55 -- # sort 00:22:03.522 18:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.522 18:38:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.522 18:38:10 -- host/discovery.sh@55 -- # xargs 00:22:03.522 18:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.780 18:38:10 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:03.780 18:38:10 -- host/discovery.sh@138 -- # get_notification_count 00:22:03.780 18:38:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:03.780 18:38:10 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:03.780 18:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.780 18:38:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.780 18:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.780 18:38:11 -- host/discovery.sh@74 -- # notification_count=2 00:22:03.780 18:38:11 -- host/discovery.sh@75 -- # notify_id=4 00:22:03.780 18:38:11 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:03.781 18:38:11 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:03.781 18:38:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.781 18:38:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.718 [2024-07-14 18:38:12.020115] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:04.718 [2024-07-14 18:38:12.020140] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:04.718 [2024-07-14 18:38:12.020176] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.718 [2024-07-14 18:38:12.106239] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:04.977 [2024-07-14 18:38:12.165397] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:04.977 [2024-07-14 18:38:12.165452] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.977 18:38:12 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.977 18:38:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:04.977 18:38:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.977 18:38:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:04.977 18:38:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.977 18:38:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:04.977 18:38:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.977 18:38:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.977 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.977 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 2024/07/14 18:38:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:04.977 request: 00:22:04.977 { 00:22:04.977 "method": "bdev_nvme_start_discovery", 00:22:04.977 "params": { 00:22:04.977 "name": "nvme", 00:22:04.977 "trtype": "tcp", 00:22:04.977 "traddr": "10.0.0.2", 00:22:04.977 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:04.977 "adrfam": "ipv4", 00:22:04.977 "trsvcid": "8009", 00:22:04.977 "wait_for_attach": true 00:22:04.977 } 
00:22:04.977 } 00:22:04.977 Got JSON-RPC error response 00:22:04.977 GoRPCClient: error on JSON-RPC call 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:04.977 18:38:12 -- common/autotest_common.sh@643 -- # es=1 00:22:04.977 18:38:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:04.977 18:38:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:04.977 18:38:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:04.977 18:38:12 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # sort 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:04.977 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # xargs 00:22:04.977 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.977 18:38:12 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:04.977 18:38:12 -- host/discovery.sh@147 -- # get_bdev_list 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # sort 00:22:04.977 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.977 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # xargs 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.977 18:38:12 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:04.977 18:38:12 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.977 18:38:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:04.977 18:38:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.977 18:38:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:04.977 18:38:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.977 18:38:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:04.977 18:38:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.977 18:38:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.977 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.977 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 2024/07/14 18:38:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:04.977 request: 00:22:04.977 { 00:22:04.977 "method": "bdev_nvme_start_discovery", 00:22:04.977 "params": { 00:22:04.977 "name": "nvme_second", 00:22:04.977 "trtype": "tcp", 00:22:04.977 "traddr": "10.0.0.2", 00:22:04.977 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:04.977 "adrfam": "ipv4", 00:22:04.977 
"trsvcid": "8009", 00:22:04.977 "wait_for_attach": true 00:22:04.977 } 00:22:04.977 } 00:22:04.977 Got JSON-RPC error response 00:22:04.977 GoRPCClient: error on JSON-RPC call 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:04.977 18:38:12 -- common/autotest_common.sh@643 -- # es=1 00:22:04.977 18:38:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:04.977 18:38:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:04.977 18:38:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:04.977 18:38:12 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:04.977 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # sort 00:22:04.977 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 18:38:12 -- host/discovery.sh@67 -- # xargs 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.977 18:38:12 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:04.977 18:38:12 -- host/discovery.sh@153 -- # get_bdev_list 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.977 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # sort 00:22:04.977 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 18:38:12 -- host/discovery.sh@55 -- # xargs 00:22:04.977 18:38:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.236 18:38:12 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.236 18:38:12 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:05.236 18:38:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:05.236 18:38:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:05.236 18:38:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:05.236 18:38:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:05.236 18:38:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:05.236 18:38:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:05.236 18:38:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:05.236 18:38:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.236 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:06.171 [2024-07-14 18:38:13.431418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.171 [2024-07-14 18:38:13.431546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.171 [2024-07-14 18:38:13.431566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaa000 with addr=10.0.0.2, port=8010 00:22:06.171 [2024-07-14 18:38:13.431630] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:06.171 [2024-07-14 18:38:13.431640] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:06.171 [2024-07-14 18:38:13.431649] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:07.104 [2024-07-14 18:38:14.431416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.104 [2024-07-14 18:38:14.431550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.104 [2024-07-14 18:38:14.431577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaa000 with addr=10.0.0.2, port=8010 00:22:07.104 [2024-07-14 18:38:14.431615] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:07.104 [2024-07-14 18:38:14.431625] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:07.104 [2024-07-14 18:38:14.431634] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:08.039 [2024-07-14 18:38:15.431304] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:08.039 2024/07/14 18:38:15 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:08.039 request: 00:22:08.039 { 00:22:08.039 "method": "bdev_nvme_start_discovery", 00:22:08.039 "params": { 00:22:08.039 "name": "nvme_second", 00:22:08.039 "trtype": "tcp", 00:22:08.039 "traddr": "10.0.0.2", 00:22:08.039 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.039 "adrfam": "ipv4", 00:22:08.039 "trsvcid": "8010", 00:22:08.039 "attach_timeout_ms": 3000 00:22:08.039 } 00:22:08.039 } 00:22:08.039 Got JSON-RPC error response 00:22:08.039 GoRPCClient: error on JSON-RPC call 00:22:08.039 18:38:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:08.039 18:38:15 -- common/autotest_common.sh@643 -- # es=1 00:22:08.039 18:38:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:08.039 18:38:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:08.039 18:38:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:08.039 18:38:15 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:08.039 18:38:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.039 18:38:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.039 18:38:15 -- host/discovery.sh@67 -- # sort 00:22:08.039 18:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.039 18:38:15 -- common/autotest_common.sh@10 -- # set +x 00:22:08.039 18:38:15 -- host/discovery.sh@67 -- # xargs 00:22:08.039 18:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.297 18:38:15 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:08.297 18:38:15 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:08.297 18:38:15 -- host/discovery.sh@162 -- # kill 95802 00:22:08.297 18:38:15 -- host/discovery.sh@163 -- # nvmftestfini 00:22:08.297 18:38:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:08.297 18:38:15 -- nvmf/common.sh@116 -- # sync 00:22:08.297 18:38:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:08.297 18:38:15 -- nvmf/common.sh@119 -- # set +e 00:22:08.297 18:38:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:08.297 18:38:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:08.297 
rmmod nvme_tcp 00:22:08.297 rmmod nvme_fabrics 00:22:08.297 rmmod nvme_keyring 00:22:08.297 18:38:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:08.297 18:38:15 -- nvmf/common.sh@123 -- # set -e 00:22:08.297 18:38:15 -- nvmf/common.sh@124 -- # return 0 00:22:08.297 18:38:15 -- nvmf/common.sh@477 -- # '[' -n 95751 ']' 00:22:08.297 18:38:15 -- nvmf/common.sh@478 -- # killprocess 95751 00:22:08.297 18:38:15 -- common/autotest_common.sh@926 -- # '[' -z 95751 ']' 00:22:08.297 18:38:15 -- common/autotest_common.sh@930 -- # kill -0 95751 00:22:08.297 18:38:15 -- common/autotest_common.sh@931 -- # uname 00:22:08.297 18:38:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:08.297 18:38:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95751 00:22:08.297 18:38:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:08.297 18:38:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:08.297 killing process with pid 95751 00:22:08.297 18:38:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95751' 00:22:08.297 18:38:15 -- common/autotest_common.sh@945 -- # kill 95751 00:22:08.297 18:38:15 -- common/autotest_common.sh@950 -- # wait 95751 00:22:08.556 18:38:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:08.556 18:38:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:08.556 18:38:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:08.556 18:38:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.556 18:38:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:08.556 18:38:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.556 18:38:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.556 18:38:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.556 18:38:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:08.556 00:22:08.556 real 0m14.141s 00:22:08.556 user 0m27.795s 00:22:08.556 sys 0m1.735s 00:22:08.556 ************************************ 00:22:08.556 END TEST nvmf_discovery 00:22:08.556 ************************************ 00:22:08.556 18:38:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.556 18:38:15 -- common/autotest_common.sh@10 -- # set +x 00:22:08.556 18:38:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:08.556 18:38:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:08.556 18:38:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.556 18:38:15 -- common/autotest_common.sh@10 -- # set +x 00:22:08.556 ************************************ 00:22:08.556 START TEST nvmf_discovery_remove_ifc 00:22:08.556 ************************************ 00:22:08.556 18:38:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:08.814 * Looking for test storage... 
00:22:08.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:08.814 18:38:15 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:08.814 18:38:15 -- nvmf/common.sh@7 -- # uname -s 00:22:08.814 18:38:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.814 18:38:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.814 18:38:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.814 18:38:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.814 18:38:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.814 18:38:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.814 18:38:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.814 18:38:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.814 18:38:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.814 18:38:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.814 18:38:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:22:08.814 18:38:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:22:08.814 18:38:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.814 18:38:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.814 18:38:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:08.814 18:38:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:08.814 18:38:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.814 18:38:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.814 18:38:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.814 18:38:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.814 18:38:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.814 18:38:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.814 18:38:16 -- 
paths/export.sh@5 -- # export PATH 00:22:08.814 18:38:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.814 18:38:16 -- nvmf/common.sh@46 -- # : 0 00:22:08.814 18:38:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:08.814 18:38:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:08.814 18:38:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:08.814 18:38:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.814 18:38:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.814 18:38:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:08.814 18:38:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:08.814 18:38:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:08.814 18:38:16 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:08.814 18:38:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:08.814 18:38:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.814 18:38:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:08.814 18:38:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:08.814 18:38:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:08.814 18:38:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.814 18:38:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.814 18:38:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.814 18:38:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:08.814 18:38:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:08.814 18:38:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:08.814 18:38:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:08.814 18:38:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:08.814 18:38:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:08.814 18:38:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.814 18:38:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.814 18:38:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:08.814 18:38:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:08.814 18:38:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:08.814 18:38:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:08.814 18:38:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:08.814 18:38:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:22:08.814 18:38:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:08.814 18:38:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:08.814 18:38:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:08.814 18:38:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:08.814 18:38:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:08.814 18:38:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:08.814 Cannot find device "nvmf_tgt_br" 00:22:08.814 18:38:16 -- nvmf/common.sh@154 -- # true 00:22:08.814 18:38:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:08.814 Cannot find device "nvmf_tgt_br2" 00:22:08.814 18:38:16 -- nvmf/common.sh@155 -- # true 00:22:08.814 18:38:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:08.814 18:38:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:08.814 Cannot find device "nvmf_tgt_br" 00:22:08.814 18:38:16 -- nvmf/common.sh@157 -- # true 00:22:08.814 18:38:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:08.814 Cannot find device "nvmf_tgt_br2" 00:22:08.814 18:38:16 -- nvmf/common.sh@158 -- # true 00:22:08.814 18:38:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:08.814 18:38:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:08.814 18:38:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:08.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.814 18:38:16 -- nvmf/common.sh@161 -- # true 00:22:08.814 18:38:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:08.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.814 18:38:16 -- nvmf/common.sh@162 -- # true 00:22:08.814 18:38:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:08.814 18:38:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:08.814 18:38:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:08.814 18:38:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:08.814 18:38:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:08.814 18:38:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:08.815 18:38:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:08.815 18:38:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:08.815 18:38:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:08.815 18:38:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:08.815 18:38:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:09.073 18:38:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:09.073 18:38:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:09.073 18:38:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:09.073 18:38:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:09.073 18:38:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:09.073 18:38:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:09.073 18:38:16 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:22:09.073 18:38:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:09.073 18:38:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:09.073 18:38:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:09.073 18:38:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:09.073 18:38:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:09.073 18:38:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:09.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:22:09.073 00:22:09.073 --- 10.0.0.2 ping statistics --- 00:22:09.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.073 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:22:09.073 18:38:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:09.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:09.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:22:09.073 00:22:09.073 --- 10.0.0.3 ping statistics --- 00:22:09.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.073 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:09.073 18:38:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:09.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:09.073 00:22:09.073 --- 10.0.0.1 ping statistics --- 00:22:09.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.073 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:09.073 18:38:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.073 18:38:16 -- nvmf/common.sh@421 -- # return 0 00:22:09.073 18:38:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:09.073 18:38:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.073 18:38:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:09.073 18:38:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:09.073 18:38:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.073 18:38:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:09.073 18:38:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:09.073 18:38:16 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:09.073 18:38:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:09.073 18:38:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:09.073 18:38:16 -- common/autotest_common.sh@10 -- # set +x 00:22:09.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.073 18:38:16 -- nvmf/common.sh@469 -- # nvmfpid=96307 00:22:09.073 18:38:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:09.073 18:38:16 -- nvmf/common.sh@470 -- # waitforlisten 96307 00:22:09.073 18:38:16 -- common/autotest_common.sh@819 -- # '[' -z 96307 ']' 00:22:09.073 18:38:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.073 18:38:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.073 18:38:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
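The nvmf_veth_init steps traced above can be reproduced in isolation with roughly the commands below; this is a condensed sketch (the second target interface, cleanup of stale devices, and error handling are omitted), not the full common.sh logic:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the trace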
00:22:09.073 18:38:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.073 18:38:16 -- common/autotest_common.sh@10 -- # set +x 00:22:09.073 [2024-07-14 18:38:16.419141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:09.073 [2024-07-14 18:38:16.419379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.331 [2024-07-14 18:38:16.560283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.331 [2024-07-14 18:38:16.628432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:09.331 [2024-07-14 18:38:16.628845] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.331 [2024-07-14 18:38:16.628992] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.331 [2024-07-14 18:38:16.629111] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.331 [2024-07-14 18:38:16.629311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.264 18:38:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.264 18:38:17 -- common/autotest_common.sh@852 -- # return 0 00:22:10.264 18:38:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:10.264 18:38:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:10.264 18:38:17 -- common/autotest_common.sh@10 -- # set +x 00:22:10.264 18:38:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.264 18:38:17 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:10.264 18:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.264 18:38:17 -- common/autotest_common.sh@10 -- # set +x 00:22:10.264 [2024-07-14 18:38:17.476591] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.264 [2024-07-14 18:38:17.484693] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:10.264 null0 00:22:10.264 [2024-07-14 18:38:17.516660] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.264 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:10.264 18:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.264 18:38:17 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96357 00:22:10.264 18:38:17 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:10.264 18:38:17 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96357 /tmp/host.sock 00:22:10.264 18:38:17 -- common/autotest_common.sh@819 -- # '[' -z 96357 ']' 00:22:10.264 18:38:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:10.265 18:38:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:10.265 18:38:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:10.265 18:38:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:10.265 18:38:17 -- common/autotest_common.sh@10 -- # set +x 00:22:10.265 [2024-07-14 18:38:17.593787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
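Two SPDK applications are in play here, which the interleaved traces can make hard to follow: the target runs inside the namespace on the default RPC socket, while a second nvmf_tgt instance acts as the host/initiator side on /tmp/host.sock. A condensed sketch with the arguments from the trace (the launch order and backgrounding are assumptions of the sketch):

    # target side: inside the namespace, core mask 0x2, RPC on /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # host side: core mask 0x1, its own RPC socket, bdev_nvme debug logging,
    # held in --wait-for-rpc until the test issues framework_start_init
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

Once both are up, the rpc_cmd calls traced above create the TCP transport on the target and add listeners on port 8009 (discovery) and port 4420 (I/O); the exact JSON of those RPCs is not echoed in the trace.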
00:22:10.265 [2024-07-14 18:38:17.594104] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96357 ] 00:22:10.522 [2024-07-14 18:38:17.738656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.522 [2024-07-14 18:38:17.806759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.522 [2024-07-14 18:38:17.807120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.088 18:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.088 18:38:18 -- common/autotest_common.sh@852 -- # return 0 00:22:11.088 18:38:18 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.088 18:38:18 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:11.088 18:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.088 18:38:18 -- common/autotest_common.sh@10 -- # set +x 00:22:11.347 18:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.347 18:38:18 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:11.347 18:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.347 18:38:18 -- common/autotest_common.sh@10 -- # set +x 00:22:11.347 18:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.347 18:38:18 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:11.347 18:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.347 18:38:18 -- common/autotest_common.sh@10 -- # set +x 00:22:12.281 [2024-07-14 18:38:19.619459] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.281 [2024-07-14 18:38:19.619484] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.281 [2024-07-14 18:38:19.619528] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.539 [2024-07-14 18:38:19.706271] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:12.539 [2024-07-14 18:38:19.762000] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:12.539 [2024-07-14 18:38:19.762043] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:12.539 [2024-07-14 18:38:19.762068] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:12.539 [2024-07-14 18:38:19.762082] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:12.539 [2024-07-14 18:38:19.762102] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.539 18:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.539 18:38:19 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.539 [2024-07-14 18:38:19.768239] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1783530 was disconnected and freed. delete nvme_qpair. 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.539 18:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.539 18:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.539 18:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.539 18:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.539 18:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.539 18:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:12.539 18:38:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.911 18:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.911 18:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.911 18:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:13.911 18:38:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:14.844 18:38:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:14.844 18:38:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.844 18:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.844 18:38:21 -- common/autotest_common.sh@10 -- # set +x 00:22:14.844 18:38:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:14.844 18:38:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:14.844 18:38:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:14.844 18:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.844 18:38:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:14.844 18:38:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
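The repeated get_bdev_list / sleep 1 cycles above are the test's polling helper. Reconstructed from the trace it looks approximately like the sketch below; the real helpers live in host/discovery_remove_ifc.sh and may differ in detail (rpc_cmd is the suite's wrapper around scripts/rpc.py):

    get_bdev_list() {
        # bdev names known to the host-side app, normalized to a single line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value,
        # e.g. "nvme0n1" after attach or "" after the target path is removed
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }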
00:22:15.800 18:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:15.800 18:38:23 -- common/autotest_common.sh@10 -- # set +x 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:15.800 18:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:15.800 18:38:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:16.732 18:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:16.732 18:38:24 -- common/autotest_common.sh@10 -- # set +x 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:16.732 18:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:16.732 18:38:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.106 18:38:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:18.106 18:38:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.106 18:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.106 18:38:25 -- common/autotest_common.sh@10 -- # set +x 00:22:18.106 18:38:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:18.106 18:38:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:18.106 18:38:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:18.106 18:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.106 [2024-07-14 18:38:25.190522] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:18.106 [2024-07-14 18:38:25.190779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.106 [2024-07-14 18:38:25.191005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.106 [2024-07-14 18:38:25.191201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.107 [2024-07-14 18:38:25.191216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.107 [2024-07-14 18:38:25.191226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.107 [2024-07-14 18:38:25.191235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.107 [2024-07-14 18:38:25.191245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.107 [2024-07-14 18:38:25.191253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.107 [2024-07-14 
18:38:25.191263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.107 [2024-07-14 18:38:25.191272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.107 [2024-07-14 18:38:25.191281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1749c50 is same with the state(5) to be set 00:22:18.107 [2024-07-14 18:38:25.200521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749c50 (9): Bad file descriptor 00:22:18.107 18:38:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:18.107 18:38:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.107 [2024-07-14 18:38:25.210544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:19.040 18:38:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.040 18:38:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.040 18:38:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.040 18:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.040 18:38:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.040 18:38:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.040 18:38:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.040 [2024-07-14 18:38:26.241598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:19.974 [2024-07-14 18:38:27.265628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:19.974 [2024-07-14 18:38:27.265758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1749c50 with addr=10.0.0.2, port=4420 00:22:19.974 [2024-07-14 18:38:27.265795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1749c50 is same with the state(5) to be set 00:22:19.974 [2024-07-14 18:38:27.265850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:19.974 [2024-07-14 18:38:27.265874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:19.974 [2024-07-14 18:38:27.265893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:19.974 [2024-07-14 18:38:27.265913] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:19.974 [2024-07-14 18:38:27.266763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749c50 (9): Bad file descriptor 00:22:19.974 [2024-07-14 18:38:27.266828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:19.974 [2024-07-14 18:38:27.266877] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:19.974 [2024-07-14 18:38:27.266946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.974 [2024-07-14 18:38:27.266977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.974 [2024-07-14 18:38:27.267003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.974 [2024-07-14 18:38:27.267024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.974 [2024-07-14 18:38:27.267046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.974 [2024-07-14 18:38:27.267067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.974 [2024-07-14 18:38:27.267089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.974 [2024-07-14 18:38:27.267110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.974 [2024-07-14 18:38:27.267140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.974 [2024-07-14 18:38:27.267160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.974 [2024-07-14 18:38:27.267181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
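The connect() errno 110 and controller-reset failures above are the intended effect of the fault injection rather than a test failure: the test drops the target's address and link, waits for the bdev list to go empty, then restores the path (seen below) and waits for discovery to re-attach. In outline, using the commands traced at discovery_remove_ifc.sh@75-76 and @82-83 plus the polling helper sketched earlier:

    # break the path: remove the target address and take the interface down
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''         # nvme0n1 disappears once the controller is failed

    # restore the path and wait for rediscovery to attach a fresh controller
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1    # the re-attached controller shows up as nvme1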
00:22:19.974 [2024-07-14 18:38:27.267241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174a060 (9): Bad file descriptor 00:22:19.974 [2024-07-14 18:38:27.268243] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:19.974 [2024-07-14 18:38:27.268293] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:19.974 18:38:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.974 18:38:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:19.974 18:38:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:20.909 18:38:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.909 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.909 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.909 18:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.909 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.909 18:38:28 -- common/autotest_common.sh@10 -- # set +x 00:22:20.909 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.909 18:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:21.167 18:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.167 18:38:28 -- common/autotest_common.sh@10 -- # set +x 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:21.167 18:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:21.167 18:38:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:22.104 [2024-07-14 18:38:29.271220] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:22.104 [2024-07-14 18:38:29.271248] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:22.104 [2024-07-14 18:38:29.271281] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.104 [2024-07-14 18:38:29.357302] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:22.104 [2024-07-14 18:38:29.412732] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:22.104 [2024-07-14 18:38:29.412796] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:22.104 [2024-07-14 18:38:29.412818] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:22.104 [2024-07-14 18:38:29.412833] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:22.104 [2024-07-14 18:38:29.412841] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:22.104 [2024-07-14 18:38:29.420039] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1756b00 was disconnected and freed. delete nvme_qpair. 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.104 18:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.104 18:38:29 -- common/autotest_common.sh@10 -- # set +x 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.104 18:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:22.104 18:38:29 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96357 00:22:22.104 18:38:29 -- common/autotest_common.sh@926 -- # '[' -z 96357 ']' 00:22:22.104 18:38:29 -- common/autotest_common.sh@930 -- # kill -0 96357 00:22:22.104 18:38:29 -- common/autotest_common.sh@931 -- # uname 00:22:22.104 18:38:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.104 18:38:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96357 00:22:22.104 18:38:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:22.104 18:38:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:22.104 killing process with pid 96357 00:22:22.104 18:38:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96357' 00:22:22.104 18:38:29 -- common/autotest_common.sh@945 -- # kill 96357 00:22:22.104 18:38:29 -- common/autotest_common.sh@950 -- # wait 96357 00:22:22.363 18:38:29 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:22.363 18:38:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:22.363 18:38:29 -- nvmf/common.sh@116 -- # sync 00:22:22.363 18:38:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:22.363 18:38:29 -- nvmf/common.sh@119 -- # set +e 00:22:22.363 18:38:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:22.363 18:38:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:22.621 rmmod nvme_tcp 00:22:22.621 rmmod nvme_fabrics 00:22:22.621 rmmod nvme_keyring 00:22:22.621 18:38:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:22.621 18:38:29 -- nvmf/common.sh@123 -- # set -e 00:22:22.621 18:38:29 -- nvmf/common.sh@124 -- # return 0 00:22:22.621 18:38:29 -- nvmf/common.sh@477 -- # '[' -n 96307 ']' 00:22:22.621 18:38:29 -- nvmf/common.sh@478 -- # killprocess 96307 00:22:22.621 18:38:29 -- common/autotest_common.sh@926 -- # '[' -z 96307 ']' 00:22:22.621 18:38:29 -- common/autotest_common.sh@930 -- # kill -0 96307 00:22:22.621 18:38:29 -- common/autotest_common.sh@931 -- # uname 00:22:22.621 18:38:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.621 18:38:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96307 00:22:22.621 18:38:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:22.621 18:38:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:22.621 killing process with pid 96307 
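killprocess, traced twice above for pids 96357 and 96307, is essentially a guarded kill-and-wait. A minimal reconstruction of the branch exercised here follows; the real autotest_common.sh function also handles sudo-launched and non-Linux processes:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 0                  # nothing to do if already gone
        # the traced run takes the Linux, non-sudo branch (process_name=reactor_*)
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }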
00:22:22.621 18:38:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96307' 00:22:22.621 18:38:29 -- common/autotest_common.sh@945 -- # kill 96307 00:22:22.621 18:38:29 -- common/autotest_common.sh@950 -- # wait 96307 00:22:22.880 18:38:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:22.880 18:38:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:22.880 18:38:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:22.880 18:38:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.880 18:38:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:22.880 18:38:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.880 18:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.880 18:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.880 18:38:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:22.880 ************************************ 00:22:22.880 END TEST nvmf_discovery_remove_ifc 00:22:22.880 ************************************ 00:22:22.880 00:22:22.880 real 0m14.182s 00:22:22.880 user 0m24.359s 00:22:22.880 sys 0m1.583s 00:22:22.880 18:38:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.880 18:38:30 -- common/autotest_common.sh@10 -- # set +x 00:22:22.880 18:38:30 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:22.880 18:38:30 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:22.880 18:38:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:22.880 18:38:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:22.880 18:38:30 -- common/autotest_common.sh@10 -- # set +x 00:22:22.880 ************************************ 00:22:22.880 START TEST nvmf_digest 00:22:22.880 ************************************ 00:22:22.880 18:38:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:22.880 * Looking for test storage... 
00:22:22.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:22.881 18:38:30 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.881 18:38:30 -- nvmf/common.sh@7 -- # uname -s 00:22:22.881 18:38:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.881 18:38:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.881 18:38:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.881 18:38:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.881 18:38:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.881 18:38:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.881 18:38:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.881 18:38:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.881 18:38:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.881 18:38:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.881 18:38:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:22:22.881 18:38:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:22:22.881 18:38:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.881 18:38:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.881 18:38:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.881 18:38:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.881 18:38:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.881 18:38:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.881 18:38:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.881 18:38:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.881 18:38:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.881 18:38:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.881 18:38:30 -- paths/export.sh@5 
-- # export PATH 00:22:22.881 18:38:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.881 18:38:30 -- nvmf/common.sh@46 -- # : 0 00:22:22.881 18:38:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:22.881 18:38:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:22.881 18:38:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:22.881 18:38:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.881 18:38:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.881 18:38:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:22.881 18:38:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:22.881 18:38:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:22.881 18:38:30 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:22.881 18:38:30 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:22.881 18:38:30 -- host/digest.sh@16 -- # runtime=2 00:22:22.881 18:38:30 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:22.881 18:38:30 -- host/digest.sh@132 -- # nvmftestinit 00:22:22.881 18:38:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:22.881 18:38:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.881 18:38:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:22.881 18:38:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:22.881 18:38:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:22.881 18:38:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.881 18:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.881 18:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.881 18:38:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:22.881 18:38:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:22.881 18:38:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:22.881 18:38:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:22.881 18:38:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:22.881 18:38:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:22.881 18:38:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.881 18:38:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.881 18:38:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:22.881 18:38:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:22.881 18:38:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.881 18:38:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.881 18:38:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.881 18:38:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.881 18:38:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.881 18:38:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.881 18:38:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.881 18:38:30 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.881 18:38:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:22.881 18:38:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:22.881 Cannot find device "nvmf_tgt_br" 00:22:22.881 18:38:30 -- nvmf/common.sh@154 -- # true 00:22:22.881 18:38:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.881 Cannot find device "nvmf_tgt_br2" 00:22:22.881 18:38:30 -- nvmf/common.sh@155 -- # true 00:22:22.881 18:38:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:22.881 18:38:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:23.140 Cannot find device "nvmf_tgt_br" 00:22:23.140 18:38:30 -- nvmf/common.sh@157 -- # true 00:22:23.140 18:38:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:23.140 Cannot find device "nvmf_tgt_br2" 00:22:23.140 18:38:30 -- nvmf/common.sh@158 -- # true 00:22:23.140 18:38:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:23.140 18:38:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:23.140 18:38:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:23.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.140 18:38:30 -- nvmf/common.sh@161 -- # true 00:22:23.140 18:38:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:23.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.140 18:38:30 -- nvmf/common.sh@162 -- # true 00:22:23.140 18:38:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:23.140 18:38:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:23.140 18:38:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:23.140 18:38:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:23.140 18:38:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:23.140 18:38:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:23.140 18:38:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:23.140 18:38:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:23.140 18:38:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:23.140 18:38:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:23.140 18:38:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:23.140 18:38:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:23.140 18:38:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:23.140 18:38:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:23.140 18:38:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:23.140 18:38:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:23.140 18:38:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:23.140 18:38:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:23.140 18:38:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:23.140 18:38:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:23.140 18:38:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:23.140 
18:38:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:23.398 18:38:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:23.398 18:38:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:23.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:22:23.398 00:22:23.398 --- 10.0.0.2 ping statistics --- 00:22:23.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.398 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:23.399 18:38:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:23.399 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:23.399 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:22:23.399 00:22:23.399 --- 10.0.0.3 ping statistics --- 00:22:23.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.399 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:23.399 18:38:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:23.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:23.399 00:22:23.399 --- 10.0.0.1 ping statistics --- 00:22:23.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.399 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:23.399 18:38:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.399 18:38:30 -- nvmf/common.sh@421 -- # return 0 00:22:23.399 18:38:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:23.399 18:38:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.399 18:38:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:23.399 18:38:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:23.399 18:38:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.399 18:38:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:23.399 18:38:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:23.399 18:38:30 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:23.399 18:38:30 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:23.399 18:38:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:23.399 18:38:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:23.399 18:38:30 -- common/autotest_common.sh@10 -- # set +x 00:22:23.399 ************************************ 00:22:23.399 START TEST nvmf_digest_clean 00:22:23.399 ************************************ 00:22:23.399 18:38:30 -- common/autotest_common.sh@1104 -- # run_digest 00:22:23.399 18:38:30 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:23.399 18:38:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:23.399 18:38:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:23.399 18:38:30 -- common/autotest_common.sh@10 -- # set +x 00:22:23.399 18:38:30 -- nvmf/common.sh@469 -- # nvmfpid=96770 00:22:23.399 18:38:30 -- nvmf/common.sh@470 -- # waitforlisten 96770 00:22:23.399 18:38:30 -- common/autotest_common.sh@819 -- # '[' -z 96770 ']' 00:22:23.399 18:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.399 18:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.399 18:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:23.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.399 18:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.399 18:38:30 -- common/autotest_common.sh@10 -- # set +x 00:22:23.399 18:38:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:23.399 [2024-07-14 18:38:30.673037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:23.399 [2024-07-14 18:38:30.673144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.399 [2024-07-14 18:38:30.816011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.657 [2024-07-14 18:38:30.884077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:23.657 [2024-07-14 18:38:30.884238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.657 [2024-07-14 18:38:30.884254] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.657 [2024-07-14 18:38:30.884266] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.657 [2024-07-14 18:38:30.884303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.225 18:38:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.225 18:38:31 -- common/autotest_common.sh@852 -- # return 0 00:22:24.225 18:38:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:24.225 18:38:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:24.225 18:38:31 -- common/autotest_common.sh@10 -- # set +x 00:22:24.225 18:38:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.225 18:38:31 -- host/digest.sh@120 -- # common_target_config 00:22:24.225 18:38:31 -- host/digest.sh@43 -- # rpc_cmd 00:22:24.225 18:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.225 18:38:31 -- common/autotest_common.sh@10 -- # set +x 00:22:24.484 null0 00:22:24.484 [2024-07-14 18:38:31.737393] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.484 [2024-07-14 18:38:31.761479] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.484 18:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.484 18:38:31 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:24.484 18:38:31 -- host/digest.sh@77 -- # local rw bs qd 00:22:24.484 18:38:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:24.484 18:38:31 -- host/digest.sh@80 -- # rw=randread 00:22:24.484 18:38:31 -- host/digest.sh@80 -- # bs=4096 00:22:24.484 18:38:31 -- host/digest.sh@80 -- # qd=128 00:22:24.484 18:38:31 -- host/digest.sh@82 -- # bperfpid=96820 00:22:24.484 18:38:31 -- host/digest.sh@83 -- # waitforlisten 96820 /var/tmp/bperf.sock 00:22:24.484 18:38:31 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:24.484 18:38:31 -- common/autotest_common.sh@819 -- # '[' -z 96820 ']' 00:22:24.484 18:38:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:24.484 
18:38:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:24.484 18:38:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:24.484 18:38:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.484 18:38:31 -- common/autotest_common.sh@10 -- # set +x 00:22:24.484 [2024-07-14 18:38:31.822785] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:24.484 [2024-07-14 18:38:31.822866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96820 ] 00:22:24.743 [2024-07-14 18:38:31.965842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.743 [2024-07-14 18:38:32.043541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.678 18:38:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.678 18:38:32 -- common/autotest_common.sh@852 -- # return 0 00:22:25.678 18:38:32 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:25.678 18:38:32 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:25.678 18:38:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.964 18:38:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.964 18:38:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:26.221 nvme0n1 00:22:26.221 18:38:33 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:26.221 18:38:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:26.221 Running I/O for 2 seconds... 
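Each run_bperf invocation in this digest test follows the same pattern visible in the trace above: start bdevperf suspended on its own RPC socket, finish framework init, attach a controller with the requested digest option, then drive the timed workload. A condensed sketch using the paths and arguments from this first randread run:

    # bdevperf on core mask 0x2, held in --wait-for-rpc until configured
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # finish init, then attach with data digest enabled (--ddgst)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # run the 2-second workload against the attached namespace (nvme0n1)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests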
00:22:28.120 00:22:28.120 Latency(us) 00:22:28.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.120 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:28.120 nvme0n1 : 2.00 22797.09 89.05 0.00 0.00 5609.26 2606.55 16681.89 00:22:28.120 =================================================================================================================== 00:22:28.120 Total : 22797.09 89.05 0.00 0.00 5609.26 2606.55 16681.89 00:22:28.120 0 00:22:28.379 18:38:35 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:28.379 18:38:35 -- host/digest.sh@92 -- # get_accel_stats 00:22:28.379 18:38:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:28.379 18:38:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:28.379 | select(.opcode=="crc32c") 00:22:28.379 | "\(.module_name) \(.executed)"' 00:22:28.379 18:38:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:28.379 18:38:35 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:28.379 18:38:35 -- host/digest.sh@93 -- # exp_module=software 00:22:28.379 18:38:35 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:28.379 18:38:35 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:28.379 18:38:35 -- host/digest.sh@97 -- # killprocess 96820 00:22:28.379 18:38:35 -- common/autotest_common.sh@926 -- # '[' -z 96820 ']' 00:22:28.379 18:38:35 -- common/autotest_common.sh@930 -- # kill -0 96820 00:22:28.379 18:38:35 -- common/autotest_common.sh@931 -- # uname 00:22:28.379 18:38:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.379 18:38:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96820 00:22:28.379 18:38:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:28.379 18:38:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:28.379 killing process with pid 96820 00:22:28.379 18:38:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96820' 00:22:28.379 Received shutdown signal, test time was about 2.000000 seconds 00:22:28.379 00:22:28.379 Latency(us) 00:22:28.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.379 =================================================================================================================== 00:22:28.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.379 18:38:35 -- common/autotest_common.sh@945 -- # kill 96820 00:22:28.379 18:38:35 -- common/autotest_common.sh@950 -- # wait 96820 00:22:28.638 18:38:35 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:28.638 18:38:35 -- host/digest.sh@77 -- # local rw bs qd 00:22:28.638 18:38:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:28.638 18:38:35 -- host/digest.sh@80 -- # rw=randread 00:22:28.638 18:38:35 -- host/digest.sh@80 -- # bs=131072 00:22:28.638 18:38:35 -- host/digest.sh@80 -- # qd=16 00:22:28.638 18:38:36 -- host/digest.sh@82 -- # bperfpid=96915 00:22:28.638 18:38:36 -- host/digest.sh@83 -- # waitforlisten 96915 /var/tmp/bperf.sock 00:22:28.638 18:38:36 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:28.638 18:38:36 -- common/autotest_common.sh@819 -- # '[' -z 96915 ']' 00:22:28.638 18:38:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:28.638 18:38:36 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:28.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:28.638 18:38:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:28.638 18:38:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.638 18:38:36 -- common/autotest_common.sh@10 -- # set +x 00:22:28.638 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:28.638 Zero copy mechanism will not be used. 00:22:28.638 [2024-07-14 18:38:36.052518] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:28.638 [2024-07-14 18:38:36.052619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96915 ] 00:22:28.897 [2024-07-14 18:38:36.188159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.897 [2024-07-14 18:38:36.258519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.832 18:38:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.832 18:38:36 -- common/autotest_common.sh@852 -- # return 0 00:22:29.832 18:38:36 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:29.832 18:38:36 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:29.832 18:38:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:29.832 18:38:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.832 18:38:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.400 nvme0n1 00:22:30.400 18:38:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:30.400 18:38:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:30.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:30.400 Zero copy mechanism will not be used. 00:22:30.400 Running I/O for 2 seconds... 
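After each 2-second run the test reads back accel statistics, as in the accel_get_stats call and jq filter traced above, and asserts that the crc32c (digest) work was actually executed and by the expected module. Roughly, with the jq expression copied from the trace:

    # module name and executed count for crc32c operations
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    (( acc_executed > 0 ))                 # some digest work must have been performed
    [[ "$acc_module" == software ]]        # this configuration expects the software module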
00:22:32.302 00:22:32.302 Latency(us) 00:22:32.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.302 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:32.302 nvme0n1 : 2.00 9620.54 1202.57 0.00 0.00 1660.33 781.96 9234.62 00:22:32.302 =================================================================================================================== 00:22:32.302 Total : 9620.54 1202.57 0.00 0.00 1660.33 781.96 9234.62 00:22:32.302 0 00:22:32.302 18:38:39 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:32.302 18:38:39 -- host/digest.sh@92 -- # get_accel_stats 00:22:32.302 18:38:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:32.302 18:38:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:32.302 18:38:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:32.302 | select(.opcode=="crc32c") 00:22:32.302 | "\(.module_name) \(.executed)"' 00:22:32.560 18:38:39 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:32.560 18:38:39 -- host/digest.sh@93 -- # exp_module=software 00:22:32.560 18:38:39 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:32.560 18:38:39 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:32.560 18:38:39 -- host/digest.sh@97 -- # killprocess 96915 00:22:32.560 18:38:39 -- common/autotest_common.sh@926 -- # '[' -z 96915 ']' 00:22:32.560 18:38:39 -- common/autotest_common.sh@930 -- # kill -0 96915 00:22:32.560 18:38:39 -- common/autotest_common.sh@931 -- # uname 00:22:32.560 18:38:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:32.560 18:38:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96915 00:22:32.560 18:38:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:32.560 18:38:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:32.560 killing process with pid 96915 00:22:32.560 18:38:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96915' 00:22:32.560 Received shutdown signal, test time was about 2.000000 seconds 00:22:32.560 00:22:32.560 Latency(us) 00:22:32.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.560 =================================================================================================================== 00:22:32.560 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.560 18:38:39 -- common/autotest_common.sh@945 -- # kill 96915 00:22:32.560 18:38:39 -- common/autotest_common.sh@950 -- # wait 96915 00:22:32.818 18:38:40 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:32.818 18:38:40 -- host/digest.sh@77 -- # local rw bs qd 00:22:32.818 18:38:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:32.818 18:38:40 -- host/digest.sh@80 -- # rw=randwrite 00:22:32.818 18:38:40 -- host/digest.sh@80 -- # bs=4096 00:22:32.818 18:38:40 -- host/digest.sh@80 -- # qd=128 00:22:32.818 18:38:40 -- host/digest.sh@82 -- # bperfpid=97001 00:22:32.818 18:38:40 -- host/digest.sh@83 -- # waitforlisten 97001 /var/tmp/bperf.sock 00:22:32.818 18:38:40 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:32.818 18:38:40 -- common/autotest_common.sh@819 -- # '[' -z 97001 ']' 00:22:32.818 18:38:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.818 18:38:40 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:32.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:32.818 18:38:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.818 18:38:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.818 18:38:40 -- common/autotest_common.sh@10 -- # set +x 00:22:32.818 [2024-07-14 18:38:40.193105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:32.818 [2024-07-14 18:38:40.193207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97001 ] 00:22:33.076 [2024-07-14 18:38:40.327534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.076 [2024-07-14 18:38:40.397939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.010 18:38:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:34.010 18:38:41 -- common/autotest_common.sh@852 -- # return 0 00:22:34.010 18:38:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:34.010 18:38:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:34.010 18:38:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:34.268 18:38:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:34.268 18:38:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:34.526 nvme0n1 00:22:34.526 18:38:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:34.526 18:38:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:34.526 Running I/O for 2 seconds... 
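The per-workload sequence is identical for every run and can be read off the commands above; a condensed sketch of that flow, with paths and arguments copied from this trace and the script's waitforlisten/cleanup handling omitted:

    BPERF_SOCK=/var/tmp/bperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 1. Start bdevperf paused (-z --wait-for-rpc) with the workload under test
    #    (the script backgrounds this and waits for the socket; simplified here).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # 2. Complete framework init, then attach the NVMe-oF/TCP controller with data digest enabled.
    $RPC -s "$BPERF_SOCK" framework_start_init
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Trigger the timed I/O run over the same RPC socket.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests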
00:22:36.426 00:22:36.426 Latency(us) 00:22:36.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.426 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:36.426 nvme0n1 : 2.00 25401.86 99.23 0.00 0.00 5034.04 2100.13 12094.37 00:22:36.426 =================================================================================================================== 00:22:36.426 Total : 25401.86 99.23 0.00 0.00 5034.04 2100.13 12094.37 00:22:36.426 0 00:22:36.684 18:38:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:36.684 18:38:43 -- host/digest.sh@92 -- # get_accel_stats 00:22:36.684 18:38:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:36.684 18:38:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:36.684 18:38:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:36.684 | select(.opcode=="crc32c") 00:22:36.684 | "\(.module_name) \(.executed)"' 00:22:36.684 18:38:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:36.684 18:38:44 -- host/digest.sh@93 -- # exp_module=software 00:22:36.684 18:38:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:36.684 18:38:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:36.684 18:38:44 -- host/digest.sh@97 -- # killprocess 97001 00:22:36.684 18:38:44 -- common/autotest_common.sh@926 -- # '[' -z 97001 ']' 00:22:36.684 18:38:44 -- common/autotest_common.sh@930 -- # kill -0 97001 00:22:36.684 18:38:44 -- common/autotest_common.sh@931 -- # uname 00:22:36.684 18:38:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:36.942 18:38:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97001 00:22:36.942 18:38:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:36.942 18:38:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:36.942 killing process with pid 97001 00:22:36.942 18:38:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97001' 00:22:36.942 18:38:44 -- common/autotest_common.sh@945 -- # kill 97001 00:22:36.942 Received shutdown signal, test time was about 2.000000 seconds 00:22:36.942 00:22:36.942 Latency(us) 00:22:36.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.942 =================================================================================================================== 00:22:36.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.942 18:38:44 -- common/autotest_common.sh@950 -- # wait 97001 00:22:36.942 18:38:44 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:36.942 18:38:44 -- host/digest.sh@77 -- # local rw bs qd 00:22:36.942 18:38:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:36.942 18:38:44 -- host/digest.sh@80 -- # rw=randwrite 00:22:36.942 18:38:44 -- host/digest.sh@80 -- # bs=131072 00:22:36.942 18:38:44 -- host/digest.sh@80 -- # qd=16 00:22:36.942 18:38:44 -- host/digest.sh@82 -- # bperfpid=97086 00:22:36.942 18:38:44 -- host/digest.sh@83 -- # waitforlisten 97086 /var/tmp/bperf.sock 00:22:36.943 18:38:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:36.943 18:38:44 -- common/autotest_common.sh@819 -- # '[' -z 97086 ']' 00:22:36.943 18:38:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.943 18:38:44 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:36.943 18:38:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.943 18:38:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:36.943 18:38:44 -- common/autotest_common.sh@10 -- # set +x 00:22:37.200 [2024-07-14 18:38:44.373920] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:37.200 [2024-07-14 18:38:44.374161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97086 ] 00:22:37.200 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:37.200 Zero copy mechanism will not be used. 00:22:37.200 [2024-07-14 18:38:44.510842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.200 [2024-07-14 18:38:44.580835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.172 18:38:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:38.172 18:38:45 -- common/autotest_common.sh@852 -- # return 0 00:22:38.172 18:38:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:38.172 18:38:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:38.172 18:38:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:38.486 18:38:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:38.486 18:38:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:38.486 nvme0n1 00:22:38.744 18:38:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:38.744 18:38:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:38.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:38.744 Zero copy mechanism will not be used. 00:22:38.744 Running I/O for 2 seconds... 
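As a sanity check on the result tables above, the MiB/s column is simply IOPS times the I/O size divided by 2^20: 22797.09 IOPS x 4096 B gives 89.05 MiB/s for the 4 KiB randread run, 9620.54 IOPS x 131072 B gives 1202.57 MiB/s for the 128 KiB randread run, and 25401.86 IOPS x 4096 B gives 99.23 MiB/s for the 4 KiB randwrite run, matching the reported values.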
00:22:40.646 00:22:40.646 Latency(us) 00:22:40.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.646 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:40.646 nvme0n1 : 2.00 8358.27 1044.78 0.00 0.00 1909.84 1630.95 6553.60 00:22:40.646 =================================================================================================================== 00:22:40.646 Total : 8358.27 1044.78 0.00 0.00 1909.84 1630.95 6553.60 00:22:40.646 0 00:22:40.646 18:38:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:40.646 18:38:48 -- host/digest.sh@92 -- # get_accel_stats 00:22:40.646 18:38:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:40.646 18:38:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:40.646 | select(.opcode=="crc32c") 00:22:40.646 | "\(.module_name) \(.executed)"' 00:22:40.646 18:38:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:40.905 18:38:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:40.905 18:38:48 -- host/digest.sh@93 -- # exp_module=software 00:22:40.905 18:38:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:40.905 18:38:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:40.905 18:38:48 -- host/digest.sh@97 -- # killprocess 97086 00:22:40.905 18:38:48 -- common/autotest_common.sh@926 -- # '[' -z 97086 ']' 00:22:40.905 18:38:48 -- common/autotest_common.sh@930 -- # kill -0 97086 00:22:40.905 18:38:48 -- common/autotest_common.sh@931 -- # uname 00:22:40.905 18:38:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:40.905 18:38:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97086 00:22:40.905 killing process with pid 97086 00:22:40.905 Received shutdown signal, test time was about 2.000000 seconds 00:22:40.905 00:22:40.905 Latency(us) 00:22:40.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.905 =================================================================================================================== 00:22:40.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.905 18:38:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:40.905 18:38:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:40.905 18:38:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97086' 00:22:40.905 18:38:48 -- common/autotest_common.sh@945 -- # kill 97086 00:22:40.905 18:38:48 -- common/autotest_common.sh@950 -- # wait 97086 00:22:41.163 18:38:48 -- host/digest.sh@126 -- # killprocess 96770 00:22:41.163 18:38:48 -- common/autotest_common.sh@926 -- # '[' -z 96770 ']' 00:22:41.163 18:38:48 -- common/autotest_common.sh@930 -- # kill -0 96770 00:22:41.163 18:38:48 -- common/autotest_common.sh@931 -- # uname 00:22:41.163 18:38:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:41.163 18:38:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96770 00:22:41.163 killing process with pid 96770 00:22:41.163 18:38:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:41.163 18:38:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:41.163 18:38:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96770' 00:22:41.163 18:38:48 -- common/autotest_common.sh@945 -- # kill 96770 00:22:41.163 18:38:48 -- common/autotest_common.sh@950 -- # wait 96770 00:22:41.422 00:22:41.422 real 0m18.097s 00:22:41.422 
user 0m33.952s 00:22:41.422 sys 0m4.764s 00:22:41.422 18:38:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.422 ************************************ 00:22:41.422 END TEST nvmf_digest_clean 00:22:41.422 ************************************ 00:22:41.422 18:38:48 -- common/autotest_common.sh@10 -- # set +x 00:22:41.422 18:38:48 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:41.422 18:38:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:41.422 18:38:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:41.422 18:38:48 -- common/autotest_common.sh@10 -- # set +x 00:22:41.422 ************************************ 00:22:41.422 START TEST nvmf_digest_error 00:22:41.422 ************************************ 00:22:41.422 18:38:48 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:41.422 18:38:48 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:41.422 18:38:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:41.422 18:38:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:41.422 18:38:48 -- common/autotest_common.sh@10 -- # set +x 00:22:41.422 18:38:48 -- nvmf/common.sh@469 -- # nvmfpid=97199 00:22:41.422 18:38:48 -- nvmf/common.sh@470 -- # waitforlisten 97199 00:22:41.422 18:38:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:41.422 18:38:48 -- common/autotest_common.sh@819 -- # '[' -z 97199 ']' 00:22:41.422 18:38:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.422 18:38:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.422 18:38:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.422 18:38:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.422 18:38:48 -- common/autotest_common.sh@10 -- # set +x 00:22:41.422 [2024-07-14 18:38:48.821373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:41.422 [2024-07-14 18:38:48.821463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.681 [2024-07-14 18:38:48.960088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.681 [2024-07-14 18:38:49.019040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:41.681 [2024-07-14 18:38:49.019193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.681 [2024-07-14 18:38:49.019206] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.681 [2024-07-14 18:38:49.019214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:41.681 [2024-07-14 18:38:49.019237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.616 18:38:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:42.616 18:38:49 -- common/autotest_common.sh@852 -- # return 0 00:22:42.616 18:38:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:42.616 18:38:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:42.617 18:38:49 -- common/autotest_common.sh@10 -- # set +x 00:22:42.617 18:38:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.617 18:38:49 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:42.617 18:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.617 18:38:49 -- common/autotest_common.sh@10 -- # set +x 00:22:42.617 [2024-07-14 18:38:49.831804] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:42.617 18:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.617 18:38:49 -- host/digest.sh@104 -- # common_target_config 00:22:42.617 18:38:49 -- host/digest.sh@43 -- # rpc_cmd 00:22:42.617 18:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.617 18:38:49 -- common/autotest_common.sh@10 -- # set +x 00:22:42.617 null0 00:22:42.617 [2024-07-14 18:38:49.934843] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.617 [2024-07-14 18:38:49.958970] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.617 18:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.617 18:38:49 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:42.617 18:38:49 -- host/digest.sh@54 -- # local rw bs qd 00:22:42.617 18:38:49 -- host/digest.sh@56 -- # rw=randread 00:22:42.617 18:38:49 -- host/digest.sh@56 -- # bs=4096 00:22:42.617 18:38:49 -- host/digest.sh@56 -- # qd=128 00:22:42.617 18:38:49 -- host/digest.sh@58 -- # bperfpid=97243 00:22:42.617 18:38:49 -- host/digest.sh@60 -- # waitforlisten 97243 /var/tmp/bperf.sock 00:22:42.617 18:38:49 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:42.617 18:38:49 -- common/autotest_common.sh@819 -- # '[' -z 97243 ']' 00:22:42.617 18:38:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:42.617 18:38:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:42.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:42.617 18:38:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:42.617 18:38:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:42.617 18:38:49 -- common/autotest_common.sh@10 -- # set +x 00:22:42.617 [2024-07-14 18:38:50.018656] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:42.617 [2024-07-14 18:38:50.018736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97243 ] 00:22:42.876 [2024-07-14 18:38:50.162342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.876 [2024-07-14 18:38:50.230603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.812 18:38:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.812 18:38:50 -- common/autotest_common.sh@852 -- # return 0 00:22:43.812 18:38:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.812 18:38:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.812 18:38:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:43.812 18:38:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.812 18:38:51 -- common/autotest_common.sh@10 -- # set +x 00:22:43.812 18:38:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.812 18:38:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.812 18:38:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:44.071 nvme0n1 00:22:44.071 18:38:51 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:44.071 18:38:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.071 18:38:51 -- common/autotest_common.sh@10 -- # set +x 00:22:44.071 18:38:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.071 18:38:51 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:44.071 18:38:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:44.330 Running I/O for 2 seconds... 
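The error-path variant set up above differs from the clean runs mainly in where crc32c is computed and how it is perturbed: the accel "error" module is assigned to crc32c and corruption is injected once the controller is attached, so the affected READ completions surface on the bdevperf side as the data digest errors that follow. A condensed sketch of that setup, reconstructed from the commands in this trace (rpc_cmd is the harness wrapper for the nvmf target app started earlier; its socket/netns plumbing is not shown in this log, so treat that routing as an assumption):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (via rpc_cmd): route crc32c operations through the error-injection accel module.
    rpc_cmd accel_assign_opc -o crc32c -m error

    # Host side (bdevperf over /var/tmp/bperf.sock): per-error NVMe stats, unlimited bdev retries.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep injection disabled while the controller attaches with data digest enabled...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then enable crc32c corruption (the script passes -i 256), which shows up below as
    # 'data digest error' messages and COMMAND TRANSIENT TRANSPORT ERROR completions.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256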
00:22:44.330 [2024-07-14 18:38:51.538677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.538738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.538767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.552252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.552304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.552333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.564898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.564949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.564978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.579336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.579390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.579419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.594236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.594271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.594299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.608354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.608388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.608416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.621346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.621380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.621408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.632426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.632460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.632489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.644999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.330 [2024-07-14 18:38:51.645033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.330 [2024-07-14 18:38:51.645061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.330 [2024-07-14 18:38:51.658819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.658855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.658883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.672037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.672071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.672098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.685895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.685946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.685975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.698394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.698427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.698455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.707238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.707273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.707302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.720381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.720414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.720442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.733369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.733403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.733430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.331 [2024-07-14 18:38:51.746840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.331 [2024-07-14 18:38:51.746873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.331 [2024-07-14 18:38:51.746901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.761123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.761157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.761185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.775032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.775066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.775094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.788148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.788195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.788206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.800129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.800175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:44.590 [2024-07-14 18:38:51.800187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.810294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.810340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.810351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.822712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.822757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.822768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.835064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.835111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.835122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.846479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.846533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.846545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.858780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.590 [2024-07-14 18:38:51.858826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.590 [2024-07-14 18:38:51.858838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.590 [2024-07-14 18:38:51.870261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.870307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.870318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.883963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.884011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:23939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.896690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.896735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.896747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.908319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.908365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.908376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.917800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.917846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.917857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.929684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.929731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.929743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.941656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.941702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.941715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.954543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.954598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.954609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.967219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.967266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.967277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.978550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.978595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.978606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.987328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.987374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.987385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:51.997782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:51.997828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:51.997839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.591 [2024-07-14 18:38:52.010052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.591 [2024-07-14 18:38:52.010115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.591 [2024-07-14 18:38:52.010141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.023268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.023314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.023326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.036077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.036123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.036134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.048114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 
00:22:44.851 [2024-07-14 18:38:52.048161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.048172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.058351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.058398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.058409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.068434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.068482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.068493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.077935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.077980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.077992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.087964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.088025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.088036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.851 [2024-07-14 18:38:52.098232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.851 [2024-07-14 18:38:52.098279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.851 [2024-07-14 18:38:52.098291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.111052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.111098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.111109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.123509] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.123554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.123590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.137391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.137454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.137466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.149686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.149734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.149745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.157971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.158017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.158029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.170903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.170950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.170961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.184226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.184274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.184286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.194908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.194955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.194967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.205011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.205058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.205069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.217204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.217259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.217272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.230524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.230582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.230594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.241764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.241815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.241828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.254322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.254369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.254382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.852 [2024-07-14 18:38:52.266783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:44.852 [2024-07-14 18:38:52.266832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.852 [2024-07-14 18:38:52.266845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.279715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.279751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.279765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.290224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.290273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.290285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.303923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.304002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.319815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.319851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.319865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.333224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.333272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.333284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.345598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.345646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.345658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.357671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.357718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.357729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.367631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.367678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.367690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.380571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.380622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.380634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.394366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.394417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.394431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.403830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.403879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.403928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.414322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.414371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.414383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.426762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.426811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.426824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.438409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.438472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.438483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.450004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.450052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.112 [2024-07-14 18:38:52.450064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.459600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.459651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.459665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.472412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.472459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.472471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.482303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.482352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.482364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.495017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.495065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.495076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.506103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.506149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.506160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.515807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.515855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.515868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.112 [2024-07-14 18:38:52.529620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.112 [2024-07-14 18:38:52.529667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.112 [2024-07-14 18:38:52.529687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.545263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.545315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.545328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.558030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.558081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.558094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.570259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.570311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.570324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.583317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.583367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.583380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.595954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.596004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.596016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.608409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.608456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.608467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.618957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.619007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.619019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.633849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.633885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.633898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.647047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.647091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.647105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.664118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.664167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.664179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.673955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.674004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.674016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.688592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.688640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.688652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.703457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.703515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.703529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.717995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 
00:22:45.372 [2024-07-14 18:38:52.718045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.718072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.732985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.733034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.733063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.748340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.748389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.748401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.762579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.762629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.762642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.776182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.776232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.372 [2024-07-14 18:38:52.791308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.372 [2024-07-14 18:38:52.791346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.372 [2024-07-14 18:38:52.791370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.631 [2024-07-14 18:38:52.806582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.631 [2024-07-14 18:38:52.806631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.631 [2024-07-14 18:38:52.806647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.631 [2024-07-14 18:38:52.820427] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.631 [2024-07-14 18:38:52.820477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.631 [2024-07-14 18:38:52.820489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.631 [2024-07-14 18:38:52.834198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.631 [2024-07-14 18:38:52.834248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.631 [2024-07-14 18:38:52.834261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.631 [2024-07-14 18:38:52.848712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.631 [2024-07-14 18:38:52.848764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.848777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.863605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.863644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.863658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.876923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.876980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.876994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.891859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.891897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.891911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.903970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.904012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.904026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:45.632 [2024-07-14 18:38:52.914828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.914877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.914889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.930408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.930458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.930470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.942773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.942823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.942835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.957697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.957742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.957755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.971185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.971247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.971276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.982439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.982490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.982515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:52.995250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:52.995301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:52.995314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:53.006293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:53.006346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:53.006361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:53.017805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:53.017841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:53.017854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:53.029886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:53.029923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:53.029936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.632 [2024-07-14 18:38:53.043446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.632 [2024-07-14 18:38:53.043484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.632 [2024-07-14 18:38:53.043513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.056605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.056682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.056710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.067704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.067741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.067754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.080779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.080812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.080824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.093822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.093888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.093900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.104810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.104859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.104886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.115983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.116033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.116046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.130022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.130059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.130072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.891 [2024-07-14 18:38:53.146065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.891 [2024-07-14 18:38:53.146109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.891 [2024-07-14 18:38:53.146142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.162389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.162434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.162452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.175420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.175458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.892 [2024-07-14 18:38:53.175472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.189755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.189821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.189834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.202781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.202817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.202831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.216910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.216953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.216970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.230620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.230679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.230691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.243550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.243622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.243635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.252759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.252812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.252833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.266552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.266599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:16827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.266611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.280445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.280493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.280516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.293301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.293349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.293362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.892 [2024-07-14 18:38:53.304430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:45.892 [2024-07-14 18:38:53.304479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.892 [2024-07-14 18:38:53.304491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.318126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.318194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.318221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.331265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.331312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.331324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.343214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.343262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.343273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.352797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.352845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.352856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.364173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.364221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.364232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.374991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.375037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.375048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.384561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.384618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.384630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.396611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.396658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.396669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.409818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.409864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.409876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.423730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.423780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.423792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.433668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 
00:22:46.151 [2024-07-14 18:38:53.433716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.433728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.445462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.445538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.445552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.456792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.456842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.456870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.467773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.467825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.467838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.480800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.480849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.480875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.494108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.494156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.494167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.508026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.508075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.508088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 [2024-07-14 18:38:53.523471] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f6580) 00:22:46.151 [2024-07-14 18:38:53.523531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.151 [2024-07-14 18:38:53.523544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.151 00:22:46.151 Latency(us) 00:22:46.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:46.151 nvme0n1 : 2.01 20255.54 79.12 0.00 0.00 6312.55 2591.65 20852.36 00:22:46.151 =================================================================================================================== 00:22:46.151 Total : 20255.54 79.12 0.00 0.00 6312.55 2591.65 20852.36 00:22:46.151 0 00:22:46.151 18:38:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:46.151 18:38:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:46.151 18:38:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:46.151 18:38:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:46.151 | .driver_specific 00:22:46.151 | .nvme_error 00:22:46.151 | .status_code 00:22:46.151 | .command_transient_transport_error' 00:22:46.409 18:38:53 -- host/digest.sh@71 -- # (( 159 > 0 )) 00:22:46.409 18:38:53 -- host/digest.sh@73 -- # killprocess 97243 00:22:46.409 18:38:53 -- common/autotest_common.sh@926 -- # '[' -z 97243 ']' 00:22:46.409 18:38:53 -- common/autotest_common.sh@930 -- # kill -0 97243 00:22:46.409 18:38:53 -- common/autotest_common.sh@931 -- # uname 00:22:46.667 18:38:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:46.667 18:38:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97243 00:22:46.667 killing process with pid 97243 00:22:46.667 Received shutdown signal, test time was about 2.000000 seconds 00:22:46.667 00:22:46.667 Latency(us) 00:22:46.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.667 =================================================================================================================== 00:22:46.667 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.667 18:38:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:46.667 18:38:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:46.667 18:38:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97243' 00:22:46.667 18:38:53 -- common/autotest_common.sh@945 -- # kill 97243 00:22:46.667 18:38:53 -- common/autotest_common.sh@950 -- # wait 97243 00:22:46.667 18:38:54 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:46.667 18:38:54 -- host/digest.sh@54 -- # local rw bs qd 00:22:46.667 18:38:54 -- host/digest.sh@56 -- # rw=randread 00:22:46.667 18:38:54 -- host/digest.sh@56 -- # bs=131072 00:22:46.667 18:38:54 -- host/digest.sh@56 -- # qd=16 00:22:46.667 18:38:54 -- host/digest.sh@58 -- # bperfpid=97332 00:22:46.667 18:38:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:46.667 18:38:54 -- host/digest.sh@60 -- # waitforlisten 97332 /var/tmp/bperf.sock 00:22:46.667 18:38:54 -- common/autotest_common.sh@819 -- # '[' -z 97332 ']' 
00:22:46.667 18:38:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:46.667 18:38:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:46.667 18:38:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:46.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:46.667 18:38:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:46.667 18:38:54 -- common/autotest_common.sh@10 -- # set +x 00:22:46.926 [2024-07-14 18:38:54.118891] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:46.926 [2024-07-14 18:38:54.119142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97332 ] 00:22:46.926 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:46.926 Zero copy mechanism will not be used. 00:22:46.926 [2024-07-14 18:38:54.256019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.926 [2024-07-14 18:38:54.330421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.890 18:38:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:47.890 18:38:55 -- common/autotest_common.sh@852 -- # return 0 00:22:47.890 18:38:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:47.890 18:38:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:48.149 18:38:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:48.149 18:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.149 18:38:55 -- common/autotest_common.sh@10 -- # set +x 00:22:48.149 18:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.149 18:38:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:48.149 18:38:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:48.408 nvme0n1 00:22:48.408 18:38:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:48.408 18:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.408 18:38:55 -- common/autotest_common.sh@10 -- # set +x 00:22:48.408 18:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.408 18:38:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:48.408 18:38:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:48.408 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:48.408 Zero copy mechanism will not be used. 00:22:48.408 Running I/O for 2 seconds... 
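The trace above shows the flow host/digest.sh follows for this case: it reads the per-bdev NVMe error statistics from bdevperf over /var/tmp/bperf.sock, treats a non-zero command_transient_transport_error count (159 for the previous run) as proof that the injected data digest errors were surfaced, kills the previous bdevperf instance, and relaunches bdevperf for the 128 KiB randread, queue-depth-16 case with crc32c corruption injected through the accel error module and TCP data digest (--ddgst) enabled on the controller. Below is a minimal shell sketch of that RPC sequence; it only restates commands visible in the trace, and the variable names, use of the default RPC socket for the injection calls, and the final check are illustrative assumptions, not the original script.

#!/usr/bin/env bash
# Sketch of the digest-error test flow seen in the trace above (not the actual host/digest.sh).
SPDK=/home/vagrant/spdk_repo/spdk      # repo path as printed in the trace
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait-for-RPC mode (-z): randread, 128 KiB I/O, queue depth 16, 2 s run.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Keep NVMe error statistics on the bdevperf side and retry failed I/O indefinitely.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# In the trace the accel_error_inject_error calls go through rpc_cmd, not through bperf.sock,
# so the default RPC socket (assumed here to be the nvmf target's) is used; injection stays
# disabled while the controller attaches.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst).
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable crc32c error injection in corrupt mode (flags copied from the trace); the corrupted
# digests are what make the reads fail with the data digest errors logged above.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then count completions that ended as COMMAND TRANSIENT TRANSPORT ERROR.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errs=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 )) && echo "data digest errors surfaced as $errs transient transport errors"

kill "$bperfpid"
wait "$bperfpid"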
00:22:48.408 [2024-07-14 18:38:55.809223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.408 [2024-07-14 18:38:55.809299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.408 [2024-07-14 18:38:55.809314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.408 [2024-07-14 18:38:55.812805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.408 [2024-07-14 18:38:55.812873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.408 [2024-07-14 18:38:55.812902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.408 [2024-07-14 18:38:55.816830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.408 [2024-07-14 18:38:55.816897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.408 [2024-07-14 18:38:55.816926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.408 [2024-07-14 18:38:55.820572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.408 [2024-07-14 18:38:55.820623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.409 [2024-07-14 18:38:55.820651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.409 [2024-07-14 18:38:55.823978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.409 [2024-07-14 18:38:55.824044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.409 [2024-07-14 18:38:55.824073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.409 [2024-07-14 18:38:55.827600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.409 [2024-07-14 18:38:55.827654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.409 [2024-07-14 18:38:55.827684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.409 [2024-07-14 18:38:55.831718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.409 [2024-07-14 18:38:55.831772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.409 [2024-07-14 18:38:55.831802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.835907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.835991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.836004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.839804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.839843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.839872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.843823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.843863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.843893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.847628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.847666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.847695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.851464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.851526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.851557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.854934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.854969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.854997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.858651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.858686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.858714] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.861618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.861668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.861696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.864749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.864799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.864828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.868461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.868523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.868553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.871165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.871215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.871243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.874719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.670 [2024-07-14 18:38:55.874752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.670 [2024-07-14 18:38:55.874779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.670 [2024-07-14 18:38:55.878635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.878669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.878698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.882459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.882534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.882563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.886136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.886186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.886215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.889730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.889779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.889808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.892641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.892691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.892719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.896199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.896249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.896278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.899832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.899883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.899911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.903006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.903056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.903083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.906021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.906055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:48.671 [2024-07-14 18:38:55.906083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.909365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.909400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.909428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.912765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.912815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.912842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.915976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.916026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.916068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.919332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.919380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.919407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.921923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.921973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.922000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.925437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.925515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.925546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.929017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.929067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.929095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.932161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.932212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.932241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.935881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.935949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.935977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.939052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.939102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.939130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.942904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.942956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.942985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.946640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.946694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.946707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.950601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.950650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.950677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.953927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.953965] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.953977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.957394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.957444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.957472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.960573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.960635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.960663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.963721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.963773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.963802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.966996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.967047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.967074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.970463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.970522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.970551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.973348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.671 [2024-07-14 18:38:55.973398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.671 [2024-07-14 18:38:55.973425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.671 [2024-07-14 18:38:55.976915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.976967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.976995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:55.979643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.979681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.979694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:55.983110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.983161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.983189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:55.987060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.987111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.987138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:55.990524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.990566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.990596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:55.993958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.994008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.994035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:55.997677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:55.997728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:55.997757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.001529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 
00:22:48.672 [2024-07-14 18:38:56.001578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.001606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.004842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.004876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.004905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.008204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.008253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.008282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.011597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.014391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.014442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.014471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.017203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.017238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.017266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.020585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.020633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.020661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.025015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.025065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.025093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.028051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.028101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.028130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.031024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.031058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.031086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.034062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.034111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.034138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.037596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.037645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.037673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.041141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.041191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.041220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.044868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.044918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.044946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.048126] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.048176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.048204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.051383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.051420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.051448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.055530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.055588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.055618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.059186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.059236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.059264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.062767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.062817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.062845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.066426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.066460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.066488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.069832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.069883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.069910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
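00:22:48.672 [editor note] The repeated "data digest error" entries above come from the initiator's CRC32C check on received C2H data PDUs: the digest computed over the payload does not match the DDGST field, so each READ is completed with COMMAND TRANSIENT TRANSPORT ERROR. The following is a minimal, self-contained sketch of what that check amounts to, assuming a generic software CRC32C; the names verify_data_digest and ddgst are illustrative only and are not SPDK APIs (SPDK uses its own, possibly accelerated, CRC32C path, and wire endianness handling is omitted here).

    /* Sketch only, not SPDK code: verifying an NVMe/TCP data digest (CRC32C). */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C: reflected poly 0x82F63B78, seed 0xFFFFFFFF, final complement. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return ~crc;
    }

    /* Returns 0 when the digest matches the payload; non-zero is the condition
     * the log reports as "data digest error" (illustrative helper name). */
    static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) == ddgst ? 0 : -1;
    }

    int main(void)
    {
        uint8_t data[32] = "nvme/tcp payload example";
        uint32_t good = crc32c(data, sizeof(data));
        printf("match:    %d\n", verify_data_digest(data, sizeof(data), good));
        data[0] ^= 0xFF; /* corrupt one byte, as a digest-error test would */
        printf("mismatch: %d\n", verify_data_digest(data, sizeof(data), good));
        return 0;
    }

00:22:48.672 [editor note] In this test run the mismatches are expected: the workload deliberately produces digest failures, and the transient transport error completions shown above are the intended outcome being exercised.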
00:22:48.672 [2024-07-14 18:38:56.072803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.072851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.672 [2024-07-14 18:38:56.072879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.672 [2024-07-14 18:38:56.075715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.672 [2024-07-14 18:38:56.075768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.673 [2024-07-14 18:38:56.075797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.673 [2024-07-14 18:38:56.079222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.673 [2024-07-14 18:38:56.079272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.673 [2024-07-14 18:38:56.079299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.673 [2024-07-14 18:38:56.082430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.673 [2024-07-14 18:38:56.082480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.673 [2024-07-14 18:38:56.082519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.673 [2024-07-14 18:38:56.085805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.673 [2024-07-14 18:38:56.085856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.673 [2024-07-14 18:38:56.085898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.673 [2024-07-14 18:38:56.089110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.673 [2024-07-14 18:38:56.089179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.673 [2024-07-14 18:38:56.089191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.934 [2024-07-14 18:38:56.092952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.934 [2024-07-14 18:38:56.093003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.934 [2024-07-14 18:38:56.093032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.934 [2024-07-14 18:38:56.096762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.934 [2024-07-14 18:38:56.096814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.934 [2024-07-14 18:38:56.096827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.934 [2024-07-14 18:38:56.099475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.934 [2024-07-14 18:38:56.099576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.934 [2024-07-14 18:38:56.099608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.103097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.103148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.103176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.106278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.106329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.106358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.109810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.109861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.109889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.112998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.113049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.113077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.116387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.116437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.116465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.120002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.120068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.120095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.123044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.123094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.123122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.126777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.126811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.126839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.130008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.130058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.130085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.133671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.133722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.133749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.136616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.136665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.136693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.140307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.140358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.140387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.143650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.143690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.143704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.147153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.147189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.147217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.150536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.150581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.150610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.154273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.154309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.154337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.157770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.157821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.157849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.161362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.161412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.161440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.164794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.164844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 
[2024-07-14 18:38:56.164872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.168189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.168239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.168267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.171741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.171779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.171792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.174545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.174593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.174621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.178223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.178273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.178301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.182195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.182245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.182273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.185818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.185868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.185896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.188816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.188866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.188895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.192414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.192465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.192492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.196302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.196353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.196382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.935 [2024-07-14 18:38:56.200350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.935 [2024-07-14 18:38:56.200401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.935 [2024-07-14 18:38:56.200429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.204326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.204376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.204404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.207252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.207302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.207330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.211076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.211127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.211155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.214935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.214985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.215012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.218230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.218281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.218309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.222132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.222183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.222211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.225871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.225921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.225949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.229611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.229662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.229690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.232866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.232916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.232944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.236054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.236106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.236135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.239436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.239486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.239542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.242893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.242944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.242972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.245981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.246030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.246058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.249139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.249192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.249220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.252877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.252926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.252954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.256507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.256583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.256629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.260208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.260261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.260274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.263694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 
00:22:48.936 [2024-07-14 18:38:56.263732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.263745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.266984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.267034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.267062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.270548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.270598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.270626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.273733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.273783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.273811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.277385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.277435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.277463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.280952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.281002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.281031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.283778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.283830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.283843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.287242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.287292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.287320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.290697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.290747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.290774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.294357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.294406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.294434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.297380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.297430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.297458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.300957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.936 [2024-07-14 18:38:56.301007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.936 [2024-07-14 18:38:56.301035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.936 [2024-07-14 18:38:56.304185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.304235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.304263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.307384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.307432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.307460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.310961] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.311010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.311037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.314074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.314123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.314152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.317882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.317931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.317959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.321673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.321724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.321737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.325003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.325054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.325082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.328209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.328260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.328288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.331245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.331296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.331323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:48.937 [2024-07-14 18:38:56.334445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.334517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.334531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.338205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.338253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.338281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.341831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.341881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.341924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.345909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.345959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.345987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.349267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.349316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.349344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.937 [2024-07-14 18:38:56.352638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:48.937 [2024-07-14 18:38:56.352690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.937 [2024-07-14 18:38:56.352720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.357179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.357232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.357277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.360853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.360921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.360934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.364380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.364463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.364476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.368471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.368552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.368567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.372021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.372071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.372099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.375865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.375940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.375968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.379268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.379320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.379347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.382741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.382777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.382790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.386657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.386691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.386703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.389912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.389963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.389991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.198 [2024-07-14 18:38:56.393751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.198 [2024-07-14 18:38:56.393803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.198 [2024-07-14 18:38:56.393815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.397498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.397558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.397586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.400998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.401048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.401076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.404368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.404417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.404445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.407797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.407834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 
[2024-07-14 18:38:56.407848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.411346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.411394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.411421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.414685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.414733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.414761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.417901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.417950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.417978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.421424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.421473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.421512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.424256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.424305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.424332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.427757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.427810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.427839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.430464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.430523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.430553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.434354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.434406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.434434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.438244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.438297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.438309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.441602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.441639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.441651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.445553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.445587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.445600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.449787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.449824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.449837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.453665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.453703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.453715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.457123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.457174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.457186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.460685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.460723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.460736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.464092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.464143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.464155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.468176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.468226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.468239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.472214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.472265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.472278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.475432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.475482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.479119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.479153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.479165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.482656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.482706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.482719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.486667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.486721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.486748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.490301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.490354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.490367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.494195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.494247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.199 [2024-07-14 18:38:56.494259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.199 [2024-07-14 18:38:56.497562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.199 [2024-07-14 18:38:56.497614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.497626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.501556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.501606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.501619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.505435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.505502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.505517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.509251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 
00:22:49.200 [2024-07-14 18:38:56.509302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.509314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.512932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.512983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.512996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.516774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.516826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.516839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.520602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.520652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.520664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.524941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.524994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.525007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.528448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.528507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.528520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.532270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.532323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.532335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.536430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.536479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.536503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.539977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.540026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.540053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.543423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.543475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.546829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.546879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.546906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.549829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.549879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.549891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.553170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.553223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.553235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.556871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.556923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.556935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.560099] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.560149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.560162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.563937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.563990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.567311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.567363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.567375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.570932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.570984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.570997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.574317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.574369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.574382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.577663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.577713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.577725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.581572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.581633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.581645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:49.200 [2024-07-14 18:38:56.585631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.585681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.585693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.589208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.589259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.589271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.592491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.592552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.592564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.596010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.596079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.596091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.599423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.599474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.200 [2024-07-14 18:38:56.599486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.200 [2024-07-14 18:38:56.603267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.200 [2024-07-14 18:38:56.603319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.201 [2024-07-14 18:38:56.603331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.201 [2024-07-14 18:38:56.606824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.201 [2024-07-14 18:38:56.606876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.201 [2024-07-14 18:38:56.606888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.201 [2024-07-14 18:38:56.610216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.201 [2024-07-14 18:38:56.610267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.201 [2024-07-14 18:38:56.610280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.201 [2024-07-14 18:38:56.613245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.201 [2024-07-14 18:38:56.613296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.201 [2024-07-14 18:38:56.613308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.201 [2024-07-14 18:38:56.617031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.201 [2024-07-14 18:38:56.617085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.201 [2024-07-14 18:38:56.617098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.201 [2024-07-14 18:38:56.620596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.201 [2024-07-14 18:38:56.620645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.201 [2024-07-14 18:38:56.620658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.624979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.625033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.625046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.628294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.628347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.628361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.632037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.632092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.632105] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.635110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.635161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.635173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.638701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.638753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.638766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.641949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.641992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.642006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.644989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.645043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.645055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.648502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.648564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.648577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.651986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.652053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.652065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.655019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.655070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.655082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.658677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.658729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.658742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.662709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.662762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.662792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.666055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.666106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.666136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.669443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.669519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.669533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.673258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.673308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.673338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.676504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.676566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.676596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.679720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.679758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.461 [2024-07-14 18:38:56.679788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.683075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.683125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.683155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.686799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.686849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.686879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.690108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.690159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.690188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.693880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.693932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.693962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.697506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.697555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.461 [2024-07-14 18:38:56.697585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.461 [2024-07-14 18:38:56.700742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.461 [2024-07-14 18:38:56.700777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.700805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.703827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.703865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.703879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.706836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.706885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.706913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.710427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.710478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.710517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.713835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.713884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.713913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.717257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.717307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.717335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.720393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.720441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.720469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.724147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.724197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.724225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.727377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.727428] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.727456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.730645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.730695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.730723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.734273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.734321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.734350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.737441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.737516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.737529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.740796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.740845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.740873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.744211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.744261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.744289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.747692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.747729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.747741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.750586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.750635] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.750663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.754014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.754064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.754093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.757536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.757586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.760804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.760854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.760883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.764162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.764212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.764239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.767351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.767401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.767429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.770399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.770450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.770478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.774106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.774156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.774185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.777728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.777779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.777807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.781932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.781983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.782011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.785836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.785886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.785914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.789312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.789361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.789388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.792254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.792306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.792334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.795480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.795555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.795626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.462 [2024-07-14 18:38:56.799536] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.462 [2024-07-14 18:38:56.799638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.462 [2024-07-14 18:38:56.799652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.802383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.802434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.802462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.806148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.806198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.806226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.809357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.809409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.809437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.812653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.812704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.812716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.815870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.815938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.815966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.819116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.819166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.819195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:49.463 [2024-07-14 18:38:56.822663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.822713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.822741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.826043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.826093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.826120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.829127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.829175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.829203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.832563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.832611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.832638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.835732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.835784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.835813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.839262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.839312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.839339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.842329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.842379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.842407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.845762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.845812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.845840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.848937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.848986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.849015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.852249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.852298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.852327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.855866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.855936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.855948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.859280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.859328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.859356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.862375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.862424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.862451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.866012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.866061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.866089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.869069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.869119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.869161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.871840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.871905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.871933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.874751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.874801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.874829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.877851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.877901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.877929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.463 [2024-07-14 18:38:56.881714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.463 [2024-07-14 18:38:56.881769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.463 [2024-07-14 18:38:56.881782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.885120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.885192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.885205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.889040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.889096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.724 [2024-07-14 18:38:56.889111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.893399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.893453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.893482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.896601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.896651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.896679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.900088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.900145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.900174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.903162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.903213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.903241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.906746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.906797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.906825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.910297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.910347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.910375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.914154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.914205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.914234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.917697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.917749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.917777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.920968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.921019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.921047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.924124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.924174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.924202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.927339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.927389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.927417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.930320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.930369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.930397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.724 [2024-07-14 18:38:56.933827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.724 [2024-07-14 18:38:56.933877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-14 18:38:56.933904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.937084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.937133] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.937161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.940562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.940624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.940653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.943825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.943864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.943893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.946870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.946919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.946946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.950375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.950424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.950452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.953692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.953741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.953769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.957009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.957057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.957085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.960095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.960142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.960170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.963467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.963527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.963555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.967320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.967370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.967398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.971158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.971210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.971239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.974440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.974517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.974530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.977962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.978010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.978038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.981114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.981163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.981191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.984167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.984216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.984245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.987288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.987338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.987366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.990371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.990421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.990449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.994082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.994132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.994161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:56.997173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:56.997222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:56.997250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.001146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.001194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.001222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.003976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.004027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.004039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.007433] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.007483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.007523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.010584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.010634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.010661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.013823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.013875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.013918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.017082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.017133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.017162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.020615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.020664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.020692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.023752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.023790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.023819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.725 [2024-07-14 18:38:57.027242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.027292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.027320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:49.725 [2024-07-14 18:38:57.030339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.725 [2024-07-14 18:38:57.030389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-14 18:38:57.030417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.033935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.033985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.034013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.037270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.037319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.037348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.040487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.040545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.040574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.044087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.044137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.044165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.047260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.047310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.047337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.050751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.050800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.050828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.054215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.054263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.054290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.057728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.057777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.057806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.061219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.061268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.061296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.065135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.065189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.065218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.069861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.069975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.070010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.073421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.073471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.073515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.076876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.076925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.076953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.079433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.079484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.079523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.082353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.082402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.082431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.086593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.086643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.086672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.089536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.089586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.089614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.093213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.093264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.093293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.096631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.096682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.096710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.100624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.100674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.100702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.104201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.104253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.104282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.108169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.108220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.108249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.111769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.111823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.111852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.115377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.115427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.115455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.119295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.119344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.119372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.123046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.123097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.123126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.126679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.126726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:49.726 [2024-07-14 18:38:57.126754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.130661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.130711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.130740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.726 [2024-07-14 18:38:57.133235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.726 [2024-07-14 18:38:57.133284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.726 [2024-07-14 18:38:57.133312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.727 [2024-07-14 18:38:57.136431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.727 [2024-07-14 18:38:57.136480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.727 [2024-07-14 18:38:57.136519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.727 [2024-07-14 18:38:57.140197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.727 [2024-07-14 18:38:57.140247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.727 [2024-07-14 18:38:57.140276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.727 [2024-07-14 18:38:57.144002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.727 [2024-07-14 18:38:57.144076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.727 [2024-07-14 18:38:57.144097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.148339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.148411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.148435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.152501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.152583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.152598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.156653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.156706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.156735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.160095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.160145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.160173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.163998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.164065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.164093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.167448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.167524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.167538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.170932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.170983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.171011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.174296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.174347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.174376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.178032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.178083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.178112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.181166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.181219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.181248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.184400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.184450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.184478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.187460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.187521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.187550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.191746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.191785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.191799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.195295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.195345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.195373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.198989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.199038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.199065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.201737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 
[2024-07-14 18:38:57.201788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.201817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.205353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.205404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.205432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.209219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.209270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.209299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.213055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.213106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.213135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.217300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.217351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.217379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.221068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.221120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.221148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.224651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.224699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.224727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.227513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.227561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.227629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.230908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.230959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.230987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.233381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.233431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.233460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.237415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.237467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.987 [2024-07-14 18:38:57.237495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.987 [2024-07-14 18:38:57.241283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.987 [2024-07-14 18:38:57.241334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.241362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.244860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.244911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.244938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.248162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.248213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.248241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.251493] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.251553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.251606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.255063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.255112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.255140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.258803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.258853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.258881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.262985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.263036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.263064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.266450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.266542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.266557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.269991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.270041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.270070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.273417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.273475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.273529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:49.988 [2024-07-14 18:38:57.276891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.276941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.276969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.280432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.280526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.280555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.284045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.284121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.287298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.287348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.287376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.291149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.291200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.291228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.294802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.294852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.294880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.297721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.297770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.297798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.301318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.301383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.301412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.304615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.304666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.304694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.308353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.308404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.308432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.311594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.311646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.311675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.315075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.315125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.315153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.318017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.318069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.318098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.321376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.321427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.321455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.325354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.325405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.325434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.329210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.329261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.329289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.332504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.332565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.332593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.335923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.335975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.336003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.339381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.339432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-14 18:38:57.339460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.988 [2024-07-14 18:38:57.342450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.988 [2024-07-14 18:38:57.342540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.342555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.346069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.346119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.989 [2024-07-14 18:38:57.346163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.349776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.349826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.349855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.353404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.353458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.353486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.356770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.356819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.356847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.359859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.359909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.359941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.362809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.362859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.362887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.366397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.366447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.366475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.370510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.370574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.370587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.374160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.374196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.374226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.377625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.377677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.377690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.381972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.382024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.382037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.385735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.385774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.385788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.389721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.389762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.389777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.393565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.393617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.393631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.398111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.398180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.398210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.401932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.401983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.402012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.989 [2024-07-14 18:38:57.405791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:49.989 [2024-07-14 18:38:57.405887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-14 18:38:57.405938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.409962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.410063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.410083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.413682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.413736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.413765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.417883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.417952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.417981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.421561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.421611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.421639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.425380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 
00:22:50.249 [2024-07-14 18:38:57.425431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.425461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.428719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.428769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.428798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.432248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.432299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.432327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.435636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.435690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.435720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.439224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.439276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.439306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.443014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.443092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.446249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.446300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.446328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.449458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.449548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.449562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.453188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.453272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.453306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.457914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.457967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.457996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.461696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.249 [2024-07-14 18:38:57.461751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.249 [2024-07-14 18:38:57.461779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.249 [2024-07-14 18:38:57.465385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.465439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.465468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.468534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.468593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.468622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.472371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.472423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.472467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.475490] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.475548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.475601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.478956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.479005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.479033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.482471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.482552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.482581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.486481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.486527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.486557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.489553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.489603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.489631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.493405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.493456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.493484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.496792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.496843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.496872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:50.250 [2024-07-14 18:38:57.500347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.500398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.500426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.503413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.503462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.506863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.506913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.506941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.510548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.510596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.510625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.514115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.514180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.514208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.518221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.518271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.518299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.521715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.521765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.521793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.524166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.524214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.524242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.527721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.527771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.527800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.531300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.531348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.531376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.534729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.534778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.534806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.538191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.538251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.538279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.541927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.541977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.542005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.545326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.545378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.545407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.548669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.548720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.548748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.552258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.552308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.552337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.555431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.555482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.555533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.558817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.558865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.558894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.562234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.562284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.250 [2024-07-14 18:38:57.562313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.250 [2024-07-14 18:38:57.566147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.250 [2024-07-14 18:38:57.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.566226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.569388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.569438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.569467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.572948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.572996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.573024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.576259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.576312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.576340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.579714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.579768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.579797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.582802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.582851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.582878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.586215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.586265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.586293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.589213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.589265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.589293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.593205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.593263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.251 [2024-07-14 18:38:57.593292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.597157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.597211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.597240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.600395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.600446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.600474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.603763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.603817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.603847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.607012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.607089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.610241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.610299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.610328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.613650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.613700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.613727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.616368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.616417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.616446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.620366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.620416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.620445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.624356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.624407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.624435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.628043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.628106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.628134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.631441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.631517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.631531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.634705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.634754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.634782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.638567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.638617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.638645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.642134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.642184] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.642212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.645219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.645269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.645296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.648637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.648686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.648714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.652586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.652634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.652662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.656544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.656604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.656632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.659988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.660023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.660035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.251 [2024-07-14 18:38:57.663769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.251 [2024-07-14 18:38:57.663807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.251 [2024-07-14 18:38:57.663820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.252 [2024-07-14 18:38:57.667243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.252 [2024-07-14 18:38:57.667278] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.252 [2024-07-14 18:38:57.667305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.252 [2024-07-14 18:38:57.671103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.252 [2024-07-14 18:38:57.671141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.252 [2024-07-14 18:38:57.671169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.674086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.674143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.674158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.677821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.677889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.677918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.682146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.682202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.682215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.685582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.685636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.685650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.689189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.689242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.689255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.693130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 
00:22:50.512 [2024-07-14 18:38:57.693184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.693196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.696721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.696774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.696787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.700347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.700398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.700410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.703717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.703756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.703776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.706874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.706911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.706924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.710420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.710472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.710485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.714580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.714636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.714649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.718073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.718145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.718158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.721157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.721211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.721223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.724349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.724400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.724412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.728330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.728382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.728395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.732723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.732774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.732786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.735908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.735975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.735988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.738874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.738936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.742583] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.742633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.742646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.745576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.745628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.512 [2024-07-14 18:38:57.745641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.512 [2024-07-14 18:38:57.749376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.512 [2024-07-14 18:38:57.749428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.749441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.752837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.752887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.752900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.756137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.756187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.756199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.759403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.759453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.759465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.762931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.762981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.762992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:50.513 [2024-07-14 18:38:57.766734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.766786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.766798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.770808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.770860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.770872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.774388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.774439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.774452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.777771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.777824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.777837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.781283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.781335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.781348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.784481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.784549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.784563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.788006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.788074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.788086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.791046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.791096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.791108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.795112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.795179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.795191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.513 [2024-07-14 18:38:57.798717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a6f0) 00:22:50.513 [2024-07-14 18:38:57.798769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.513 [2024-07-14 18:38:57.798782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.513 00:22:50.513 Latency(us) 00:22:50.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.513 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:50.513 nvme0n1 : 2.00 8826.98 1103.37 0.00 0.00 1809.49 577.16 6970.65 00:22:50.513 =================================================================================================================== 00:22:50.513 Total : 8826.98 1103.37 0.00 0.00 1809.49 577.16 6970.65 00:22:50.513 0 00:22:50.513 18:38:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:50.513 18:38:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:50.513 18:38:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:50.513 18:38:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:50.513 | .driver_specific 00:22:50.513 | .nvme_error 00:22:50.513 | .status_code 00:22:50.513 | .command_transient_transport_error' 00:22:50.772 18:38:58 -- host/digest.sh@71 -- # (( 569 > 0 )) 00:22:50.772 18:38:58 -- host/digest.sh@73 -- # killprocess 97332 00:22:50.772 18:38:58 -- common/autotest_common.sh@926 -- # '[' -z 97332 ']' 00:22:50.772 18:38:58 -- common/autotest_common.sh@930 -- # kill -0 97332 00:22:50.772 18:38:58 -- common/autotest_common.sh@931 -- # uname 00:22:50.772 18:38:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:50.772 18:38:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97332 00:22:50.772 18:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:50.772 18:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:50.772 18:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97332' 00:22:50.772 killing process with pid 97332 00:22:50.772 18:38:58 -- common/autotest_common.sh@945 -- # kill 97332 00:22:50.772 Received shutdown signal, test 
time was about 2.000000 seconds 00:22:50.772 00:22:50.772 Latency(us) 00:22:50.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.772 =================================================================================================================== 00:22:50.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.772 18:38:58 -- common/autotest_common.sh@950 -- # wait 97332 00:22:51.031 18:38:58 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:51.031 18:38:58 -- host/digest.sh@54 -- # local rw bs qd 00:22:51.031 18:38:58 -- host/digest.sh@56 -- # rw=randwrite 00:22:51.031 18:38:58 -- host/digest.sh@56 -- # bs=4096 00:22:51.031 18:38:58 -- host/digest.sh@56 -- # qd=128 00:22:51.031 18:38:58 -- host/digest.sh@58 -- # bperfpid=97418 00:22:51.031 18:38:58 -- host/digest.sh@60 -- # waitforlisten 97418 /var/tmp/bperf.sock 00:22:51.031 18:38:58 -- common/autotest_common.sh@819 -- # '[' -z 97418 ']' 00:22:51.031 18:38:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:51.031 18:38:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:51.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:51.031 18:38:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:51.031 18:38:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:51.031 18:38:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:51.031 18:38:58 -- common/autotest_common.sh@10 -- # set +x 00:22:51.031 [2024-07-14 18:38:58.370567] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
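The randread pass above was judged by the counter read in the trace just after the latency summary: host/digest.sh@71 calls get_transient_errcount, which queries bdev_get_iostat over the bperf RPC socket and extracts the command_transient_transport_error count with jq. The value came back as 569, the (( 569 > 0 )) check passed, and bdevperf pid 97332 was killed; a fresh bdevperf instance has just been launched for the next pass with -w randwrite -o 4096 -q 128, matching the rw/bs/qd values set by run_bperf_err randwrite 4096 128. A minimal sketch of that counter check, assuming the socket path and jq filter shown in the trace (the errcount variable name is illustrative):

    # Minimal sketch of the per-pass check traced above; socket path, bdev name and
    # jq filter are taken from the trace, the variable name is illustrative.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                 bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the pass only succeeds if transient transport errors were actually recorded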
00:22:51.031 [2024-07-14 18:38:58.370669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97418 ] 00:22:51.289 [2024-07-14 18:38:58.510494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.289 [2024-07-14 18:38:58.582308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.223 18:38:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:52.223 18:38:59 -- common/autotest_common.sh@852 -- # return 0 00:22:52.223 18:38:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:52.223 18:38:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:52.223 18:38:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:52.223 18:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.223 18:38:59 -- common/autotest_common.sh@10 -- # set +x 00:22:52.223 18:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.223 18:38:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:52.223 18:38:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:52.789 nvme0n1 00:22:52.789 18:38:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:52.789 18:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.789 18:38:59 -- common/autotest_common.sh@10 -- # set +x 00:22:52.789 18:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.789 18:38:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:52.789 18:38:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:52.789 Running I/O for 2 seconds... 
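Before the I/O starts, the trace above shows the fault being re-armed for the write pass: bdev_nvme_set_options enables per-status NVMe error counters (--nvme-error-stat) and turns off bdev-level retries (--bdev-retry-count -1), any previous crc32c injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is injected again (the -i 256 argument is copied verbatim from the trace), and bdevperf.py perform_tests kicks off the 2-second run. Collected into one place as a sketch; all paths and arguments are the ones shown in the trace, and routing the accel_error_inject_error calls through the same bperf socket is an assumption, since the trace shows them only behind the rpc_cmd wrapper:

    # The RPC sequence traced above, gathered for reference. $RPC is shorthand;
    # sending the accel_error_inject_error calls to the bperf socket is an assumption,
    # the trace only shows them behind the rpc_cmd wrapper.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests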
00:22:52.789 [2024-07-14 18:39:00.034638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eea00 00:22:52.789 [2024-07-14 18:39:00.035983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.789 [2024-07-14 18:39:00.036054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.789 [2024-07-14 18:39:00.045610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fb480 00:22:52.789 [2024-07-14 18:39:00.046838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.789 [2024-07-14 18:39:00.046886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.789 [2024-07-14 18:39:00.055017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f2510 00:22:52.789 [2024-07-14 18:39:00.055839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.055873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.065142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fa7d8 00:22:52.790 [2024-07-14 18:39:00.065926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.065958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.075591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f31b8 00:22:52.790 [2024-07-14 18:39:00.076753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.076817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.086598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f9b30 00:22:52.790 [2024-07-14 18:39:00.087523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.087619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.098955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f3e60 00:22:52.790 [2024-07-14 18:39:00.099818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.099861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:22:52.790 [2024-07-14 18:39:00.109545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f8e88 00:22:52.790 [2024-07-14 18:39:00.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.110327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.119336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f4b08 00:22:52.790 [2024-07-14 18:39:00.120275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.120308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.128711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7da8 00:22:52.790 [2024-07-14 18:39:00.129132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.129165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.140720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190de8a8 00:22:52.790 [2024-07-14 18:39:00.141752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.141782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.147855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ed0b0 00:22:52.790 [2024-07-14 18:39:00.148058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.148089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.160024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e88f8 00:22:52.790 [2024-07-14 18:39:00.161601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.161633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.171627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eea00 00:22:52.790 [2024-07-14 18:39:00.172901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.172931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.178576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f8a50 00:22:52.790 [2024-07-14 18:39:00.179543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.179624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.190401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1f80 00:22:52.790 [2024-07-14 18:39:00.191359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.191389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.199276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6b70 00:22:52.790 [2024-07-14 18:39:00.200336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.200367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.790 [2024-07-14 18:39:00.209096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaef0 00:22:52.790 [2024-07-14 18:39:00.209617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.790 [2024-07-14 18:39:00.209642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.220183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1710 00:22:53.049 [2024-07-14 18:39:00.221292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.221329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.230138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e7c50 00:22:53.049 [2024-07-14 18:39:00.231255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.231290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.240030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1b48 00:22:53.049 [2024-07-14 18:39:00.241158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.241192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.251132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1710 00:22:53.049 [2024-07-14 18:39:00.251889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.251919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.262853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190edd58 00:22:53.049 [2024-07-14 18:39:00.264131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.264164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.270352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0a68 00:22:53.049 [2024-07-14 18:39:00.270696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.270740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.281530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaef0 00:22:53.049 [2024-07-14 18:39:00.282306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.282355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.290266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e2c28 00:22:53.049 [2024-07-14 18:39:00.291142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.291183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.301216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0a68 00:22:53.049 [2024-07-14 18:39:00.302722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.302933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.316244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190dfdc0 00:22:53.049 [2024-07-14 18:39:00.317843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.318029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.324636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1f80 00:22:53.049 [2024-07-14 18:39:00.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.325359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.337036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fda78 00:22:53.049 [2024-07-14 18:39:00.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.338086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.344338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f0ff8 00:22:53.049 [2024-07-14 18:39:00.344467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.344485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.357557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eea00 00:22:53.049 [2024-07-14 18:39:00.358307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.358343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.367778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e49b0 00:22:53.049 [2024-07-14 18:39:00.368329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.368365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.376300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ef6a8 00:22:53.049 [2024-07-14 18:39:00.376476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.376495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.388231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f5be8 00:22:53.049 [2024-07-14 18:39:00.388929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.388965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.398238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e01f8 00:22:53.049 [2024-07-14 18:39:00.399100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.399135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.407906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190de470 00:22:53.049 [2024-07-14 18:39:00.408591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.408638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.416302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f1430 00:22:53.049 [2024-07-14 18:39:00.416676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.416705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.428144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e4de8 00:22:53.049 [2024-07-14 18:39:00.429220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.429251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.435927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190de470 00:22:53.049 [2024-07-14 18:39:00.436062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.436081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.449602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6fa8 00:22:53.049 [2024-07-14 18:39:00.451277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.049 [2024-07-14 18:39:00.451311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:53.049 [2024-07-14 18:39:00.460153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fac10 00:22:53.050 [2024-07-14 18:39:00.460795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.050 [2024-07-14 18:39:00.460846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:53.050 [2024-07-14 18:39:00.470305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0630 00:22:53.050 [2024-07-14 18:39:00.471465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.050 [2024-07-14 18:39:00.471550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:53.308 [2024-07-14 18:39:00.480471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fc998 00:22:53.308 [2024-07-14 18:39:00.481320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.308 [2024-07-14 18:39:00.481358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:53.308 [2024-07-14 18:39:00.490321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fb480 00:22:53.308 [2024-07-14 18:39:00.491093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.308 [2024-07-14 18:39:00.491130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:53.308 [2024-07-14 18:39:00.502148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eff18 00:22:53.308 [2024-07-14 18:39:00.502875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.502910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.511309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f96f8 00:22:53.309 [2024-07-14 18:39:00.512561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.512619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.521506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ebfd0 00:22:53.309 [2024-07-14 18:39:00.521954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.522010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.533008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7100 00:22:53.309 [2024-07-14 18:39:00.534119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.534150] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.541312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190df988 00:22:53.309 [2024-07-14 18:39:00.542574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.542631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.551154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f57b0 00:22:53.309 [2024-07-14 18:39:00.551920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.551969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.560123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f1ca0 00:22:53.309 [2024-07-14 18:39:00.561271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.561301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.569782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e2c28 00:22:53.309 [2024-07-14 18:39:00.570205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.570237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.581213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ef6a8 00:22:53.309 [2024-07-14 18:39:00.582275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.582304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.588322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fb8b8 00:22:53.309 [2024-07-14 18:39:00.588477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.588495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.599834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ec408 00:22:53.309 [2024-07-14 18:39:00.600687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 
18:39:00.600720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.609810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7100 00:22:53.309 [2024-07-14 18:39:00.611116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.611151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.619436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ec840 00:22:53.309 [2024-07-14 18:39:00.620119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.620149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.629413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fa3a0 00:22:53.309 [2024-07-14 18:39:00.630074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.630104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.638352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190df988 00:22:53.309 [2024-07-14 18:39:00.639319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.639367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.647234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ee190 00:22:53.309 [2024-07-14 18:39:00.647979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.648016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.658504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e49b0 00:22:53.309 [2024-07-14 18:39:00.659231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.659278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.666790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f8618 00:22:53.309 [2024-07-14 18:39:00.667649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 
[2024-07-14 18:39:00.667682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.676087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f1430 00:22:53.309 [2024-07-14 18:39:00.677281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.677313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.687409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f4f40 00:22:53.309 [2024-07-14 18:39:00.688438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.688470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.695914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f9b30 00:22:53.309 [2024-07-14 18:39:00.696719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.696781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.705316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f57b0 00:22:53.309 [2024-07-14 18:39:00.706048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.706096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.714801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f46d0 00:22:53.309 [2024-07-14 18:39:00.715446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.715481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:53.309 [2024-07-14 18:39:00.724179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5ec8 00:22:53.309 [2024-07-14 18:39:00.724832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.309 [2024-07-14 18:39:00.724865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.734303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e3060 00:22:53.568 [2024-07-14 18:39:00.735138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15567 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:53.568 [2024-07-14 18:39:00.735187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.744382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fa3a0 00:22:53.568 [2024-07-14 18:39:00.745175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.745230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.754081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eee38 00:22:53.568 [2024-07-14 18:39:00.754763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.754798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.764394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190dece0 00:22:53.568 [2024-07-14 18:39:00.765131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.765160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.773071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190df118 00:22:53.568 [2024-07-14 18:39:00.774295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.774327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.782127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e88f8 00:22:53.568 [2024-07-14 18:39:00.782346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.782369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.793370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5658 00:22:53.568 [2024-07-14 18:39:00.794462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.794522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.804300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ddc00 00:22:53.568 [2024-07-14 18:39:00.806179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6140 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.806212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.815092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e4578 00:22:53.568 [2024-07-14 18:39:00.816881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.816913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.825602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ed920 00:22:53.568 [2024-07-14 18:39:00.827188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.827220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.835854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eb328 00:22:53.568 [2024-07-14 18:39:00.837453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.837485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.845806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fb048 00:22:53.568 [2024-07-14 18:39:00.847422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.847454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.856016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e73e0 00:22:53.568 [2024-07-14 18:39:00.857534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.857592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.866631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaef0 00:22:53.568 [2024-07-14 18:39:00.868340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.868375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.876313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eff18 00:22:53.568 [2024-07-14 18:39:00.877843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16130 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.877876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.885806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5a90 00:22:53.568 [2024-07-14 18:39:00.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.887254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:53.568 [2024-07-14 18:39:00.895725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f4b08 00:22:53.568 [2024-07-14 18:39:00.897063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.568 [2024-07-14 18:39:00.897096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.905891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fc560 00:22:53.569 [2024-07-14 18:39:00.906635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.906671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.915797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fb8b8 00:22:53.569 [2024-07-14 18:39:00.917250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.917284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.925705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f20d8 00:22:53.569 [2024-07-14 18:39:00.925875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.925894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.936711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ed4e8 00:22:53.569 [2024-07-14 18:39:00.937596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.937628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.946547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e9168 00:22:53.569 [2024-07-14 18:39:00.948074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 
nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.948105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.956609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f8618 00:22:53.569 [2024-07-14 18:39:00.958314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.958354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.968871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e38d0 00:22:53.569 [2024-07-14 18:39:00.970144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.970179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.976594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1b48 00:22:53.569 [2024-07-14 18:39:00.976959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.977020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:53.569 [2024-07-14 18:39:00.988985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fe2e8 00:22:53.569 [2024-07-14 18:39:00.991090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.569 [2024-07-14 18:39:00.991125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:00.999004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fe720 00:22:53.827 [2024-07-14 18:39:01.000419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.000472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.009416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ec840 00:22:53.827 [2024-07-14 18:39:01.009838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.009864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.019489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190efae0 00:22:53.827 [2024-07-14 18:39:01.021002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:16116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.021036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.029443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f2d80 00:22:53.827 [2024-07-14 18:39:01.029953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.029982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.041253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eff18 00:22:53.827 [2024-07-14 18:39:01.042751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.042785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.052284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0ea0 00:22:53.827 [2024-07-14 18:39:01.053064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.053093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.065372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaab8 00:22:53.827 [2024-07-14 18:39:01.066760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.827 [2024-07-14 18:39:01.066794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:53.827 [2024-07-14 18:39:01.073939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7da8 00:22:53.828 [2024-07-14 18:39:01.074208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.074233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.087138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f6890 00:22:53.828 [2024-07-14 18:39:01.087659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.087688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.097315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eb760 00:22:53.828 [2024-07-14 18:39:01.097743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.097771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.106819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e73e0 00:22:53.828 [2024-07-14 18:39:01.107198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.107222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.116628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaef0 00:22:53.828 [2024-07-14 18:39:01.117243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.117279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.127552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e95a0 00:22:53.828 [2024-07-14 18:39:01.128033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.128071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.137359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e4140 00:22:53.828 [2024-07-14 18:39:01.137785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.137814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.147758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f2d80 00:22:53.828 [2024-07-14 18:39:01.148887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.148935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.159250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eb760 00:22:53.828 [2024-07-14 18:39:01.160368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.160401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.166818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fb048 00:22:53.828 [2024-07-14 18:39:01.166926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.166945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.178772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f1430 00:22:53.828 [2024-07-14 18:39:01.179481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.179524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.188835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fc128 00:22:53.828 [2024-07-14 18:39:01.189639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.189702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.198577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190df550 00:22:53.828 [2024-07-14 18:39:01.199781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.199817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.208106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5658 00:22:53.828 [2024-07-14 18:39:01.209335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.209367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.218318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0ea0 00:22:53.828 [2024-07-14 18:39:01.219527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.219647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.228399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fe2e8 00:22:53.828 [2024-07-14 18:39:01.229567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.229797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.240274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f3a28 00:22:53.828 [2024-07-14 18:39:01.241259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.241292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:53.828 [2024-07-14 18:39:01.249370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f9f68 00:22:53.828 [2024-07-14 18:39:01.250655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.828 [2024-07-14 18:39:01.250717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.259693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5658 00:22:54.087 [2024-07-14 18:39:01.260237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.260277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.272925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eff18 00:22:54.087 [2024-07-14 18:39:01.274130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.274164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.281328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e95a0 00:22:54.087 [2024-07-14 18:39:01.281578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.281629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.294543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6300 00:22:54.087 [2024-07-14 18:39:01.295374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.295436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.303754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f4f40 00:22:54.087 [2024-07-14 18:39:01.304792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.304825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.314282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fdeb0 00:22:54.087 [2024-07-14 
18:39:01.316064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.316097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.324822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190df118 00:22:54.087 [2024-07-14 18:39:01.325673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.325722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.334934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e88f8 00:22:54.087 [2024-07-14 18:39:01.335321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.087 [2024-07-14 18:39:01.335346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.087 [2024-07-14 18:39:01.345659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e8088 00:22:54.088 [2024-07-14 18:39:01.346036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.346060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.356288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f4f40 00:22:54.088 [2024-07-14 18:39:01.357429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.357463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.366634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fc998 00:22:54.088 [2024-07-14 18:39:01.368081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.368115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.377267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5658 00:22:54.088 [2024-07-14 18:39:01.378077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.378110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.386697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaab8 00:22:54.088 
[2024-07-14 18:39:01.387808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.387844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.396681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f9f68 00:22:54.088 [2024-07-14 18:39:01.396930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.396950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.406608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e84c0 00:22:54.088 [2024-07-14 18:39:01.407080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.407109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.416309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7da8 00:22:54.088 [2024-07-14 18:39:01.416748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.416783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.426192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5658 00:22:54.088 [2024-07-14 18:39:01.426709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.426770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.437482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190dece0 00:22:54.088 [2024-07-14 18:39:01.437915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.437980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.447457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6fa8 00:22:54.088 [2024-07-14 18:39:01.447877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.447912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.458393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with 
pdu=0x2000190e6738 00:22:54.088 [2024-07-14 18:39:01.458868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.458902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.470142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f46d0 00:22:54.088 [2024-07-14 18:39:01.470649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.470677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.481367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190df118 00:22:54.088 [2024-07-14 18:39:01.481974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.482023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.492720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f31b8 00:22:54.088 [2024-07-14 18:39:01.493221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.493257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:54.088 [2024-07-14 18:39:01.502792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e95a0 00:22:54.088 [2024-07-14 18:39:01.504349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.088 [2024-07-14 18:39:01.504382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.516186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f2d80 00:22:54.347 [2024-07-14 18:39:01.517290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.347 [2024-07-14 18:39:01.517326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.523957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0630 00:22:54.347 [2024-07-14 18:39:01.524092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.347 [2024-07-14 18:39:01.524112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.535299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8c0620) with pdu=0x2000190ec840 00:22:54.347 [2024-07-14 18:39:01.535994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.347 [2024-07-14 18:39:01.536046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.546928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f0788 00:22:54.347 [2024-07-14 18:39:01.548201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.347 [2024-07-14 18:39:01.548234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.553939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fc560 00:22:54.347 [2024-07-14 18:39:01.555082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.347 [2024-07-14 18:39:01.555114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.565418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e4de8 00:22:54.347 [2024-07-14 18:39:01.566140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.347 [2024-07-14 18:39:01.566169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:54.347 [2024-07-14 18:39:01.575374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1710 00:22:54.348 [2024-07-14 18:39:01.576177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.576206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.585406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e01f8 00:22:54.348 [2024-07-14 18:39:01.586306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.586340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.593921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f1868 00:22:54.348 [2024-07-14 18:39:01.595026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.595057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.603046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8c0620) with pdu=0x2000190e9168 00:22:54.348 [2024-07-14 18:39:01.603216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.603236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.612851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f31b8 00:22:54.348 [2024-07-14 18:39:01.613446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.613480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.622986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f6020 00:22:54.348 [2024-07-14 18:39:01.623303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.623328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.632728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e3498 00:22:54.348 [2024-07-14 18:39:01.632999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.633022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.642459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190de470 00:22:54.348 [2024-07-14 18:39:01.642758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.642787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.652475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7da8 00:22:54.348 [2024-07-14 18:39:01.652756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.652782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.663842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaab8 00:22:54.348 [2024-07-14 18:39:01.665354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.665387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.673697] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f1ca0 00:22:54.348 [2024-07-14 18:39:01.675185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.675218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.683771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190de470 00:22:54.348 [2024-07-14 18:39:01.685347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.685381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.694582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e95a0 00:22:54.348 [2024-07-14 18:39:01.696297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.696334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.704950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7100 00:22:54.348 [2024-07-14 18:39:01.706556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.706617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.714476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ebb98 00:22:54.348 [2024-07-14 18:39:01.716155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.716187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.724029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e3060 00:22:54.348 [2024-07-14 18:39:01.725622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.725653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.732410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6738 00:22:54.348 [2024-07-14 18:39:01.732760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.732789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.742048] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f9b30 00:22:54.348 [2024-07-14 18:39:01.743034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.743068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.751105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaef0 00:22:54.348 [2024-07-14 18:39:01.751339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.751359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:54.348 [2024-07-14 18:39:01.760977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0a68 00:22:54.348 [2024-07-14 18:39:01.761577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.348 [2024-07-14 18:39:01.761611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.771392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f8618 00:22:54.608 [2024-07-14 18:39:01.772595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.772654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.781987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6b70 00:22:54.608 [2024-07-14 18:39:01.783063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.783099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.793344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f7970 00:22:54.608 [2024-07-14 18:39:01.794447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.794521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.800535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eb328 00:22:54.608 [2024-07-14 18:39:01.800722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.800741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 
18:39:01.812327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f6cc8 00:22:54.608 [2024-07-14 18:39:01.813213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.813241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.821347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1b48 00:22:54.608 [2024-07-14 18:39:01.822389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.822424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.831152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f2510 00:22:54.608 [2024-07-14 18:39:01.832754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.832787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.842210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f0ff8 00:22:54.608 [2024-07-14 18:39:01.842864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.842899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.852816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ee5c8 00:22:54.608 [2024-07-14 18:39:01.853428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.853464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.862686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190edd58 00:22:54.608 [2024-07-14 18:39:01.863827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.863888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.872708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ee5c8 00:22:54.608 [2024-07-14 18:39:01.873356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.873389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
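Each pair of lines above is one injected failure: tcp.c reports a CRC32C mismatch on the data PDU carried by the write, and the matching completion reaches the host as COMMAND TRANSIENT TRANSPORT ERROR (status 00/22), which the bdev layer retries instead of failing the job. If this console output is saved to a file, the two counts can be cross-checked with a quick grep; this is only a sketch, and console.log is a hypothetical capture file, not something the test writes itself.

# Sketch: tally target-side digest failures vs. host-side transient-transport completions
# in a saved copy of this console output (console.log is a hypothetical file name).
digest_errors=$(grep -oF 'Data digest error on tqpair' console.log | wc -l)
transient_completions=$(grep -oF 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l)
echo "digest errors: ${digest_errors}, transient completions: ${transient_completions}"

In this output the two kinds of entries appear strictly in pairs, one transient completion per digest failure.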
00:22:54.608 [2024-07-14 18:39:01.882743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e5ec8 00:22:54.608 [2024-07-14 18:39:01.883448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.883494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.892105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fa3a0 00:22:54.608 [2024-07-14 18:39:01.893191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.893223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.901429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e6738 00:22:54.608 [2024-07-14 18:39:01.902382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.902416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.913312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ecc78 00:22:54.608 [2024-07-14 18:39:01.914271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.914305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.922072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190ed4e8 00:22:54.608 [2024-07-14 18:39:01.923032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.608 [2024-07-14 18:39:01.923064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:54.608 [2024-07-14 18:39:01.932373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e1710 00:22:54.609 [2024-07-14 18:39:01.933028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.933055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:01.942217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e4140 00:22:54.609 [2024-07-14 18:39:01.942856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.942890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0068 
p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:01.951840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f4f40 00:22:54.609 [2024-07-14 18:39:01.952052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.952074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:01.964352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f8a50 00:22:54.609 [2024-07-14 18:39:01.965206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.965235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:01.974613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190f20d8 00:22:54.609 [2024-07-14 18:39:01.975433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.975531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:01.984146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eaab8 00:22:54.609 [2024-07-14 18:39:01.985482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.985557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:01.993593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e0630 00:22:54.609 [2024-07-14 18:39:01.993926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:01.993951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:02.003459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190e95a0 00:22:54.609 [2024-07-14 18:39:02.003866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:02.003896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:02.013448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190fe720 00:22:54.609 [2024-07-14 18:39:02.014301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:02.014336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:54.609 [2024-07-14 18:39:02.023969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0620) with pdu=0x2000190eb328 00:22:54.609 [2024-07-14 18:39:02.024399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.609 [2024-07-14 18:39:02.024440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:54.609 00:22:54.609 Latency(us) 00:22:54.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.609 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:54.609 nvme0n1 : 2.01 24962.58 97.51 0.00 0.00 5121.46 1899.05 14000.87 00:22:54.609 =================================================================================================================== 00:22:54.609 Total : 24962.58 97.51 0.00 0.00 5121.46 1899.05 14000.87 00:22:54.609 0 00:22:54.867 18:39:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:54.868 18:39:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:54.868 18:39:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:54.868 18:39:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:54.868 | .driver_specific 00:22:54.868 | .nvme_error 00:22:54.868 | .status_code 00:22:54.868 | .command_transient_transport_error' 00:22:55.126 18:39:02 -- host/digest.sh@71 -- # (( 196 > 0 )) 00:22:55.126 18:39:02 -- host/digest.sh@73 -- # killprocess 97418 00:22:55.126 18:39:02 -- common/autotest_common.sh@926 -- # '[' -z 97418 ']' 00:22:55.126 18:39:02 -- common/autotest_common.sh@930 -- # kill -0 97418 00:22:55.126 18:39:02 -- common/autotest_common.sh@931 -- # uname 00:22:55.126 18:39:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:55.126 18:39:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97418 00:22:55.126 killing process with pid 97418 00:22:55.126 Received shutdown signal, test time was about 2.000000 seconds 00:22:55.126 00:22:55.126 Latency(us) 00:22:55.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.126 =================================================================================================================== 00:22:55.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:55.126 18:39:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:55.126 18:39:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:55.126 18:39:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97418' 00:22:55.126 18:39:02 -- common/autotest_common.sh@945 -- # kill 97418 00:22:55.126 18:39:02 -- common/autotest_common.sh@950 -- # wait 97418 00:22:55.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
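The trace above is the pass/fail check for the run that just finished: digest.sh queries the bdevperf RPC socket for nvme0n1's I/O statistics, extracts the command_transient_transport_error counter from the returned JSON with jq, and requires it to be greater than zero (here it is 196) before killing the bdevperf process and moving on. Pulled out of its helper functions, the same check looks roughly like the sketch below; the rpc.py path, socket, and jq filter are copied from the trace, while check_transient_errors is only an illustrative wrapper name.

# Sketch of the traced get_transient_errcount check as a standalone helper
# (check_transient_errors is a hypothetical name; paths and filter come from the trace).
check_transient_errors() {
    local bdev=$1
    local count
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The run only counts as a pass when error injection produced at least one transient error.
    (( count > 0 ))
}

check_transient_errors nvme0n1 && echo 'transient transport errors were recorded for nvme0n1'

The per-status-code counters come from the --nvme-error-stat option, which the trace for the next run shows being passed to bdev_nvme_set_options.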
00:22:55.126 18:39:02 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:55.126 18:39:02 -- host/digest.sh@54 -- # local rw bs qd 00:22:55.126 18:39:02 -- host/digest.sh@56 -- # rw=randwrite 00:22:55.126 18:39:02 -- host/digest.sh@56 -- # bs=131072 00:22:55.126 18:39:02 -- host/digest.sh@56 -- # qd=16 00:22:55.126 18:39:02 -- host/digest.sh@58 -- # bperfpid=97513 00:22:55.126 18:39:02 -- host/digest.sh@60 -- # waitforlisten 97513 /var/tmp/bperf.sock 00:22:55.126 18:39:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:55.126 18:39:02 -- common/autotest_common.sh@819 -- # '[' -z 97513 ']' 00:22:55.126 18:39:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.126 18:39:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:55.126 18:39:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.126 18:39:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:55.126 18:39:02 -- common/autotest_common.sh@10 -- # set +x 00:22:55.385 [2024-07-14 18:39:02.595647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:55.385 [2024-07-14 18:39:02.595919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:22:55.385 Zero copy mechanism will not be used. 00:22:55.385 =spdk_pid97513 ] 00:22:55.385 [2024-07-14 18:39:02.730702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.385 [2024-07-14 18:39:02.807622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.318 18:39:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:56.318 18:39:03 -- common/autotest_common.sh@852 -- # return 0 00:22:56.318 18:39:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.318 18:39:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.574 18:39:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:56.574 18:39:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.574 18:39:03 -- common/autotest_common.sh@10 -- # set +x 00:22:56.574 18:39:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.574 18:39:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.574 18:39:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.832 nvme0n1 00:22:56.832 18:39:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:56.832 18:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.832 18:39:04 -- common/autotest_common.sh@10 -- # set +x 00:22:56.832 18:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.832 18:39:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:56.832 18:39:04 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:56.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:56.832 Zero copy mechanism will not be used. 00:22:56.832 Running I/O for 2 seconds... 00:22:56.832 [2024-07-14 18:39:04.246399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:56.832 [2024-07-14 18:39:04.246715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.832 [2024-07-14 18:39:04.246761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.832 [2024-07-14 18:39:04.250759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:56.832 [2024-07-14 18:39:04.250907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.832 [2024-07-14 18:39:04.250931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.832 [2024-07-14 18:39:04.255269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:56.832 [2024-07-14 18:39:04.255426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.832 [2024-07-14 18:39:04.255450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.260681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.260795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.260818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.264621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.264733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.264754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.268540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.268665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.268686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.272574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.272713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.272734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.276513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.276736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.276788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.280486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.280732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.280790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.284639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.284769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.284790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.288743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.288841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.288862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.292904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.293017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.293037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.296909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.297020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.297041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.300856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 
18:39:04.300994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.301030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.305021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.305186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.305207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.092 [2024-07-14 18:39:04.309238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.092 [2024-07-14 18:39:04.309458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.092 [2024-07-14 18:39:04.309478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.313691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.313934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.313963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.318403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.318579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.318616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.322535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.322687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.322709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.326495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.326653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.326674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.330400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with 
pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.330543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.330577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.334418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.334605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.334627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.338486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.338660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.338683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.342629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.342857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.342894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.346569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.346762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.346813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.350683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.350846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.350867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.354789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.354889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.354923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.358832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.358959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.358979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.363042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.363182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.363204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.367164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.367305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.367326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.371291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.371435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.371456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.375530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.375838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.375863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.379633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.379825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.379849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.383685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.383817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.383839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.387555] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.387722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.387744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.391554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.391706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.391728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.395420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.395560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.395633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.399375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.399511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.399531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.403323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.403462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.403482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.407379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.407672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.407697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.411307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.411542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.411601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
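For context on the errors in this second run (randwrite with 131072-byte I/O at queue depth 16), the recipe traced just before the output began is: bdevperf's NVMe layer is set to retry indefinitely and keep per-status-code error statistics, crc32c error injection is reset and later armed through rpc_cmd (which in this suite appears to address the nvmf target application rather than bdevperf), the controller is attached over TCP with data digest (--ddgst) enabled, 32 crc32c operations are marked for corruption so the computed digests stop matching, and perform_tests starts the timed workload. Condensed from those traced commands, the sequence is roughly the sketch below; the sockets, address, and NQN are copied from the trace, and TARGET_RPC using rpc.py's default socket is an assumption, since the trace only shows the rpc_cmd wrapper.

# Sketch of the crc32c digest-error injection flow, condensed from the RPCs traced above.
# TARGET_RPC's default socket is an assumption; everything else is taken from the trace.
BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
TARGET_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'

$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
$TARGET_RPC accel_error_inject_error -o crc32c -t disable                  # clear any previous injection
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # attach with TCP data digest on
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32            # corrupt 32 crc32c results
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                                   # run the timed 2-second workload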
00:22:57.093 [2024-07-14 18:39:04.415332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.415480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.415501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.419322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.419434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.419454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.423278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.423387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.423407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.093 [2024-07-14 18:39:04.427303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.093 [2024-07-14 18:39:04.427412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.093 [2024-07-14 18:39:04.427432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.431274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.431420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.431440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.435359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.435501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.435522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.439474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.439761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.439784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.443341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.443639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.443662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.447389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.447553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.447613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.451425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.451588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.451612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.455474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.455652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.455673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.459447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.459626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.459648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.463481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.463678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.463699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.467599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.467734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.467755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.471551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.471807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.471831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.475517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.475854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.475906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.479492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.479693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.479715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.483446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.483644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.483665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.487428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.487621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.487643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.491384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.491527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.491560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.495304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.495442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.495462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.499318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.499474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.499495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.503508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.503778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.503832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.507642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.507975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.508007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.094 [2024-07-14 18:39:04.511990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.094 [2024-07-14 18:39:04.512177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.094 [2024-07-14 18:39:04.512200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.354 [2024-07-14 18:39:04.516698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.516827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.516851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.521421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.521623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.521670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.525855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.525983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 
[2024-07-14 18:39:04.526005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.530352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.530491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.530545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.534831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.534992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.535013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.539172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.539386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.539407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.543622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.543876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.543919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.547954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.548096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.548134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.552202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.552379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.552400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.556627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.556750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.556771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.561049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.561176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.561197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.565294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.565440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.565461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.569725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.569881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.569919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.574580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.574809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.574839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.578668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.578852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.578890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.582814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.582964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.582985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.586938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.587062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.587082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.590920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.591029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.591049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.594894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.595005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.595025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.598916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.599055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.599077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.602885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.603041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.603062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.607040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.607281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.607302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.611061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.611259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.611279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.615151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.615292] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.615313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.619047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.619168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.619189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.623021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.623139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.623160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.626945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.627056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.627076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.630890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.631029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.631049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.634961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.355 [2024-07-14 18:39:04.635103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.355 [2024-07-14 18:39:04.635124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.355 [2024-07-14 18:39:04.639133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.639354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.639376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.643048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.643319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.643382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.647109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.647224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.647244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.651063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.651217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.651238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.655140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.655278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.655298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.659121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.659237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.659258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.663229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.663366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.663387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.667459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.667644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.667668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.671642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 
[2024-07-14 18:39:04.671873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.671927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.675621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.675824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.675862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.679473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.679669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.679691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.683395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.683513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.683533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.687267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.687393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.687412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.691155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.691284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.691304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.695107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.695247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.695268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.699058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) 
with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.699199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.699219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.703198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.703419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.703440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.707163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.707367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.711186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.711327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.711348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.715168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.715301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.715322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.719127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.719252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.719273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.723484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.723680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.723702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.727619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.727760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.727783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.731493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.731712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.731734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.735737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.735987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.736052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.739934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.740229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.740277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.744186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.744306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.744329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.748345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.748475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.748496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.752614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.356 [2024-07-14 18:39:04.752747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.356 [2024-07-14 18:39:04.752769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.356 [2024-07-14 18:39:04.756770] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.357 [2024-07-14 18:39:04.756870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.357 [2024-07-14 18:39:04.756905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.357 [2024-07-14 18:39:04.760988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.357 [2024-07-14 18:39:04.761145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.357 [2024-07-14 18:39:04.761167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.357 [2024-07-14 18:39:04.765113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.357 [2024-07-14 18:39:04.765272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.357 [2024-07-14 18:39:04.765293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.357 [2024-07-14 18:39:04.769253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.357 [2024-07-14 18:39:04.769474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.357 [2024-07-14 18:39:04.769509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.357 [2024-07-14 18:39:04.773477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.357 [2024-07-14 18:39:04.773900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.357 [2024-07-14 18:39:04.773943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.777820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.777933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.777955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.781725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.781862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.781883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
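The repeated tcp.c:2034:data_crc32_calc_done messages above report NVMe/TCP data digest failures: the CRC32C calculated over each received data PDU does not match the digest carried with the PDU, and each affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Below is a minimal, self-contained sketch of that kind of digest check; it is plain C for illustration only, not SPDK's implementation, and the payload bytes and "received" digest are made-up example values.

/* Illustrative sketch only: a CRC32C data digest check of the kind that
 * data_crc32_calc_done reports on. Not SPDK code; payload contents and the
 * "received" digest below are example values. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise, reflected CRC32C (Castagnoli), the digest NVMe/TCP uses for
 * header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];            /* example data PDU payload */
    uint32_t received = 0xDEADBEEFu; /* made-up digest taken from the PDU */

    memset(payload, 0xA5, sizeof(payload));

    uint32_t calculated = crc32c(payload, sizeof(payload));
    if (calculated != received) {
        /* A mismatch is what the host logs as a "Data digest error"; the
         * command is then completed with a transport-level status. */
        printf("data digest error: calculated 0x%08x, received 0x%08x\n",
               calculated, received);
        return 1;
    }
    printf("data digest ok: 0x%08x\n", calculated);
    return 0;
}

Because the corruption is detected at the transport layer, each WRITE is failed with a retryable transport status rather than a media or command error, which is consistent with the (00/22) completions printed throughout this run.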
00:22:57.617 [2024-07-14 18:39:04.785987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.786113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.786136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.789937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.790063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.790084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.793847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.793987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.794008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.797801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.797951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.797972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.801766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.801949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.801970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.805742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.805883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.805903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.809603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.809711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.809731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.813558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.813716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.813736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.817494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.817644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.817665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.821448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.821579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.821600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.825688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.825830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.825852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.830165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.830354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.830377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.834568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.834807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.834829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.838777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.838874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.838895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.842979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.843113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.847168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.847339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.847360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.851152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.851262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.851282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.855004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.855112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.855134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.858875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.859010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.859030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.617 [2024-07-14 18:39:04.862761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.617 [2024-07-14 18:39:04.862911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.617 [2024-07-14 18:39:04.862932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.866767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.866951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.866971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.870703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.870829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.870850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.874686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.874816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.874838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.878707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.878849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.878869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.882695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.882815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.882835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.886816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.886910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.886931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.890958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.891100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.891120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.895213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.895403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 
[2024-07-14 18:39:04.895425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.899470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.899765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.899790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.903696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.903828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.903850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.908150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.908299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.908320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.912631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.912836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.912873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.917091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.917219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.917240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.921455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.921629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.921651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.925892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.926016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.926037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.930136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.930305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.930326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.934393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.934626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.934648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.938594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.938731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.938752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.942682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.942818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.942838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.946946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.947139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.947176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.950940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.951054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.951075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.954934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.955045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.955066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.959223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.959369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.959391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.963516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.963749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.963772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.967983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.968228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.968251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.972285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.972432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.972454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.976581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.976713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.976734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.980657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.980813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.980834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.984747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.984845] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.984866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.988866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.988981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.989001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.992961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.993102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.993123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:04.997106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:04.997302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:04.997324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.001389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.001632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.001659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.005651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.005813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.005834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.009960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.010081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.010103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.014239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.014400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.014422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.018540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.018673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.018694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.022716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.022841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.022862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.026874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.027022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.027044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.031378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.031622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.031645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.035850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.618 [2024-07-14 18:39:05.036063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.618 [2024-07-14 18:39:05.036084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.618 [2024-07-14 18:39:05.040357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.040498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.040523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.044665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 
00:22:57.878 [2024-07-14 18:39:05.044797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.044820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.048915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.049029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.049052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.053276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.053423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.053446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.057685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.057800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.057821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.061978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.062116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.062154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.066279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.066470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.066506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.070795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.071007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.071027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.075289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.075434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.075456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.079398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.079596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.079619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.083776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.083949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.083973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.088319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.088476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.088501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.092774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.092900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.092924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.097095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.097270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.097293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.101474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.101672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.101693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.106074] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.106316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.106338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.110406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.110597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.110630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.114807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.114937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.114958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.119090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.119280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.119303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.123316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.123442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.123464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.127487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.127663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.127692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.132095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.132253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.132275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:57.878 [2024-07-14 18:39:05.136494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.878 [2024-07-14 18:39:05.136706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.878 [2024-07-14 18:39:05.136727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.878 [2024-07-14 18:39:05.140913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.141104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.141141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.145116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.145283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.145304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.149356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.149495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.149517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.153554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.153701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.153721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.157806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.157916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.157937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.161945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.162069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.162090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.166192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.166338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.166360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.170437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.170735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.170770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.174874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.175104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.175140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.179076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.179257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.179278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.183300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.183425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.183446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.187452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.187683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.187706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.191663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.191755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.191778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.195789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.195938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.195958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.200006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.200166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.200187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.204428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.204624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.204657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.208685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.208871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.208892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.212813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.212968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.212989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.217008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.217142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.217179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.221418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.221620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.221642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.225504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.225642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.225663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.229741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.229858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.229878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.233950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.234088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.234109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.238163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.238336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.238357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.242524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.242742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.242763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.246645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.246784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.246805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.250949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.251076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 
[2024-07-14 18:39:05.251096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.255119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.255288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.255310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.259268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.879 [2024-07-14 18:39:05.259386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.879 [2024-07-14 18:39:05.259407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.879 [2024-07-14 18:39:05.263359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.263471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.263503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.267494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.267711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.267734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.271714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.271911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.271948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.276067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.276306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.276327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.280377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.280564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.280586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.284660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.284794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.288796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.288962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.288983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.293026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.293178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.293198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.880 [2024-07-14 18:39:05.297282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:57.880 [2024-07-14 18:39:05.297438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.880 [2024-07-14 18:39:05.297467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.302045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.302211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.302235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.306356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.306628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.306652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.310764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.310950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.311003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.314775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.314917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.314938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.318776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.318896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.318917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.322883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.323039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.323060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.326830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.326944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.326965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.330844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.330951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.330972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.334994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.139 [2024-07-14 18:39:05.335145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.139 [2024-07-14 18:39:05.335183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.139 [2024-07-14 18:39:05.339172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.339382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.339421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.344034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.344290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.344313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.348368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.348562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.348584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.352721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.352854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.352889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.356727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.356902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.356938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.360882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.361007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.361028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.364842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.364978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.364998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.369030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 
[2024-07-14 18:39:05.369187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.369208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.373253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.373426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.373448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.377591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.377790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.377811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.381808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.381953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.381973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.385907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.386028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.386048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.390198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.390353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.390374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.394295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.394411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.394433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.398626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) 
with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.398749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.398769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.402889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.403030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.403050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.407094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.407279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.407300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.411437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.411751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.411777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.415515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.415693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.415716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.419503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.419674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.419697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.423737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.423878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.423930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.427915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.428059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.428079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.432279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.432400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.432421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.436577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.436723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.436743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.440879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.441059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.441080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.445224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.445437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.445459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.449282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.449422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.449442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.453424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.453616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.140 [2024-07-14 18:39:05.453637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.140 [2024-07-14 18:39:05.457660] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.140 [2024-07-14 18:39:05.457805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.457826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.461732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.461875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.461911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.465625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.465738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.465759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.469671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.469816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.469837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.473787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.473967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.473988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.477884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.478103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.478123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.482069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.482208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.482230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:58.141 [2024-07-14 18:39:05.486105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.486255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.486276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.490310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.490471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.490506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.494457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.494637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.494658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.498692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.498812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.498833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.502914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.503054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.503075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.507016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.507204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.507226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.511347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.511608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.511631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.515461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.515662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.515685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.519648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.519770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.519793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.523844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.524015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.527993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.528146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.528167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.532175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.532288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.532309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.536672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.536819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.536840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.541177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.541355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.541378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.545791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.546029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.546051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.550300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.550459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.550481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.554973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.555092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.555113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.141 [2024-07-14 18:39:05.559516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.141 [2024-07-14 18:39:05.559684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.141 [2024-07-14 18:39:05.559709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.564264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.564380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.564404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.569098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.569255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.569278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.573591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.573757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.573779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.577935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.578101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.578122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.582061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.582262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.582283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.586241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.586389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.586410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.590401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.590584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.590605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.594656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.594804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.594824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.598751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.598879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.598899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.602840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.602952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 
[2024-07-14 18:39:05.602973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.607033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.607225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.611205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.611377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.611399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.615369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.615659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.615689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.619655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.619827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.619850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.624062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.624192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.624213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.628052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.628222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.628244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.631985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.401 [2024-07-14 18:39:05.632147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:58.401 [2024-07-14 18:39:05.632168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.401 [2024-07-14 18:39:05.636084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.636203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.640247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.640388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.640410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.644401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.644585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.644606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.648956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.649208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.649260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.653546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.653710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.653733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.657904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.658015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.658038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.662378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.662567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.662591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.666655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.666765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.666787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.670863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.670982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.671003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.675022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.675177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.675199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.679088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.679273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.679295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.683270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.683478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.683522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.687407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.687610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.687634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.691429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.691630] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.691653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.695459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.695721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.695760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.699414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.699572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.699610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.703476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.703652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.703675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.707675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.707825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.707848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.711984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.712198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.712219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.716298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.716551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.716572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.720540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.720706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.720726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.724671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.724773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.724793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.728845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.729019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.729040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.732984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.733101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.733122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.737166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.737291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.737312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.741292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.741430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.741452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.745558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.745734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.745755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.749663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 
[2024-07-14 18:39:05.749871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.749892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.753763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.753904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.753924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.757774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.757908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.757929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.761834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.761995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.762016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.765877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.766016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.766036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.769887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.770017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.770037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.773954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.402 [2024-07-14 18:39:05.774097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.402 [2024-07-14 18:39:05.774117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.402 [2024-07-14 18:39:05.778123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with 
pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.778310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.778331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.782280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.782506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.782527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.786449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.786611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.786633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.790410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.790558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.790578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.794746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.794889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.794909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.798846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.798979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.798999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.802880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.802988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.803008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.806850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.806995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.807015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.810901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.811065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.811085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.814991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.815216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.815237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.818911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.819067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.819087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.403 [2024-07-14 18:39:05.823306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.403 [2024-07-14 18:39:05.823449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.403 [2024-07-14 18:39:05.823472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.827457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.827700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.827725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.831939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.832072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.832095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.835957] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.836063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.836084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.840118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.840283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.840305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.844244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.844415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.844437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.848481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.848721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.848743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.852638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.852781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.852802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.856812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.856960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.856981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.861020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.861194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.861215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:58.666 [2024-07-14 18:39:05.865243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.865356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.865377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.869474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.869629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.869649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.873741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.873877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.873897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.877878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.878043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.878063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.882069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.882287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.882308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.886284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.886427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.886449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.890447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.890613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.890634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.894773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.894914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.894934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.898917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.899048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.666 [2024-07-14 18:39:05.899069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.666 [2024-07-14 18:39:05.903007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.666 [2024-07-14 18:39:05.903120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.903157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.907539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.907781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.907806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.912144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.912310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.912333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.916310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.916529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.916552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.920393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.920570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.920592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.924561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.924719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.924740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.928696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.928836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.928857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.932793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.932942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.932962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.936956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.937083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.937104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.941056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.941206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.941227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.945282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.945464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.945485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.949505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.949736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.949758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.953621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.953795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.953816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.957851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.958002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.958024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.962059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.962222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.962244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.966384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.966545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.966566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.970511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.970650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.970671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.974704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.974840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.974860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.978829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.979008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 
[2024-07-14 18:39:05.979029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.983003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.983220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.983241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.987187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.987332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.987354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.991317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.991454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.991475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.995657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.995800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.995821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:05.999783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:05.999938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:05.999973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.003824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.003990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.004011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.007911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.008066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.008102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.012018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.012214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.012235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.016243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.016448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.016469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.020483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.020664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.020701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.024629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.024773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.024794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.028753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.028929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.028950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.032968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.033081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.033101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.037123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.037261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.037281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.041224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.041362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.041383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.045379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.045560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.045582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.049586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.049800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.049820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.667 [2024-07-14 18:39:06.053635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.667 [2024-07-14 18:39:06.053781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.667 [2024-07-14 18:39:06.053802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.057692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.057807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.057827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.061926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.062105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.062141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.066120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.066245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.066267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.070306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.070432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.070453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.074608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.074746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.074766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.078806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.078957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.078977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.668 [2024-07-14 18:39:06.083246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.668 [2024-07-14 18:39:06.083452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.668 [2024-07-14 18:39:06.083486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.087939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.088246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.088291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.092385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.092525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.092549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.096837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 
[2024-07-14 18:39:06.097024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.097048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.101251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.101369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.101393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.105701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.105821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.105843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.109999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.110177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.110199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.114215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.114382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.114403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.118362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.118605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.118627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.122570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.122708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.122729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.126541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with 
pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.126677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.126698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.130720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.130857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.130877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.134959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.135099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.135120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.139335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.139460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.139494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.144112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.144277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.144299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.148791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.149012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.149033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.153341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.153618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.153649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.157832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.157964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.157985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.162280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.162400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.162421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.166879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.167062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.167086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.171309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.171452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.171476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.175674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.175785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.175808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.179951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.180117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.180150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.184151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.184345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.184384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.188602] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.188809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.188830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.192858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.961 [2024-07-14 18:39:06.192997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.961 [2024-07-14 18:39:06.193018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.961 [2024-07-14 18:39:06.197078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.197217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.197238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.201609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.201771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.201792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.206041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.206188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.206209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.210595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.210692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.210715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.215009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.215156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.215179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
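The error burst ends just below, and host/digest.sh then validates it through the bdev layer rather than by parsing the console: get_transient_errcount queries the bperf application's iostat for nvme0n1 and extracts the transient-transport-error counter from the driver-specific NVMe error statistics, which is what the (( 474 > 0 )) check that follows asserts on. A minimal standalone sketch of the same query, assuming the bperf RPC socket and bdev name used in this run:

  # Sketch of the transient-error check traced below, assuming /var/tmp/bperf.sock and bdev nvme0n1
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || echo 'no transient transport errors recorded'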
00:22:58.962 [2024-07-14 18:39:06.219303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.219467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.219501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.223841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.224106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.224145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.227998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.228233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.228255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.962 [2024-07-14 18:39:06.232182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8c0960) with pdu=0x2000190fef90 00:22:58.962 [2024-07-14 18:39:06.232311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.962 [2024-07-14 18:39:06.232331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.962 00:22:58.962 Latency(us) 00:22:58.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.962 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:58.962 nvme0n1 : 2.00 7354.62 919.33 0.00 0.00 2170.41 1519.24 10604.92 00:22:58.962 =================================================================================================================== 00:22:58.962 Total : 7354.62 919.33 0.00 0.00 2170.41 1519.24 10604.92 00:22:58.962 0 00:22:58.962 18:39:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:58.962 18:39:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:58.962 18:39:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:58.962 | .driver_specific 00:22:58.962 | .nvme_error 00:22:58.962 | .status_code 00:22:58.962 | .command_transient_transport_error' 00:22:58.962 18:39:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:59.230 18:39:06 -- host/digest.sh@71 -- # (( 474 > 0 )) 00:22:59.230 18:39:06 -- host/digest.sh@73 -- # killprocess 97513 00:22:59.230 18:39:06 -- common/autotest_common.sh@926 -- # '[' -z 97513 ']' 00:22:59.230 18:39:06 -- common/autotest_common.sh@930 -- # kill -0 97513 00:22:59.230 18:39:06 -- common/autotest_common.sh@931 -- # uname 00:22:59.230 18:39:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:59.230 18:39:06 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 97513 00:22:59.230 18:39:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:59.230 18:39:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:59.230 killing process with pid 97513 00:22:59.230 18:39:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97513' 00:22:59.230 Received shutdown signal, test time was about 2.000000 seconds 00:22:59.230 00:22:59.230 Latency(us) 00:22:59.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.230 =================================================================================================================== 00:22:59.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.231 18:39:06 -- common/autotest_common.sh@945 -- # kill 97513 00:22:59.231 18:39:06 -- common/autotest_common.sh@950 -- # wait 97513 00:22:59.489 18:39:06 -- host/digest.sh@115 -- # killprocess 97199 00:22:59.489 18:39:06 -- common/autotest_common.sh@926 -- # '[' -z 97199 ']' 00:22:59.489 18:39:06 -- common/autotest_common.sh@930 -- # kill -0 97199 00:22:59.489 18:39:06 -- common/autotest_common.sh@931 -- # uname 00:22:59.489 18:39:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:59.489 18:39:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97199 00:22:59.489 18:39:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:59.489 18:39:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:59.489 killing process with pid 97199 00:22:59.489 18:39:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97199' 00:22:59.489 18:39:06 -- common/autotest_common.sh@945 -- # kill 97199 00:22:59.489 18:39:06 -- common/autotest_common.sh@950 -- # wait 97199 00:22:59.747 00:22:59.747 real 0m18.169s 00:22:59.747 user 0m34.128s 00:22:59.747 sys 0m4.928s 00:22:59.747 18:39:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.747 18:39:06 -- common/autotest_common.sh@10 -- # set +x 00:22:59.747 ************************************ 00:22:59.747 END TEST nvmf_digest_error 00:22:59.747 ************************************ 00:22:59.747 18:39:06 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:59.747 18:39:06 -- host/digest.sh@139 -- # nvmftestfini 00:22:59.747 18:39:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:59.747 18:39:06 -- nvmf/common.sh@116 -- # sync 00:22:59.747 18:39:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:59.747 18:39:07 -- nvmf/common.sh@119 -- # set +e 00:22:59.747 18:39:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:59.747 18:39:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:59.747 rmmod nvme_tcp 00:22:59.747 rmmod nvme_fabrics 00:22:59.747 rmmod nvme_keyring 00:22:59.747 18:39:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:59.747 18:39:07 -- nvmf/common.sh@123 -- # set -e 00:22:59.747 18:39:07 -- nvmf/common.sh@124 -- # return 0 00:22:59.747 18:39:07 -- nvmf/common.sh@477 -- # '[' -n 97199 ']' 00:22:59.747 18:39:07 -- nvmf/common.sh@478 -- # killprocess 97199 00:22:59.747 18:39:07 -- common/autotest_common.sh@926 -- # '[' -z 97199 ']' 00:22:59.747 18:39:07 -- common/autotest_common.sh@930 -- # kill -0 97199 00:22:59.747 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (97199) - No such process 00:22:59.747 Process with pid 97199 is not found 00:22:59.747 18:39:07 -- common/autotest_common.sh@953 -- # echo 'Process with pid 97199 is not found' 00:22:59.747 18:39:07 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:22:59.747 18:39:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:59.747 18:39:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:59.747 18:39:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.747 18:39:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:59.747 18:39:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.747 18:39:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.747 18:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.747 18:39:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:59.747 00:22:59.747 real 0m36.967s 00:22:59.747 user 1m8.230s 00:22:59.747 sys 0m10.004s 00:22:59.747 18:39:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.747 ************************************ 00:22:59.747 END TEST nvmf_digest 00:22:59.747 18:39:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.747 ************************************ 00:22:59.747 18:39:07 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:59.747 18:39:07 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:59.747 18:39:07 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:59.747 18:39:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:59.747 18:39:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:59.747 18:39:07 -- common/autotest_common.sh@10 -- # set +x 00:23:00.005 ************************************ 00:23:00.005 START TEST nvmf_mdns_discovery 00:23:00.005 ************************************ 00:23:00.005 18:39:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:00.005 * Looking for test storage... 
00:23:00.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:00.005 18:39:07 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:00.005 18:39:07 -- nvmf/common.sh@7 -- # uname -s 00:23:00.005 18:39:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.005 18:39:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.005 18:39:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.005 18:39:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.005 18:39:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.005 18:39:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.005 18:39:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.005 18:39:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.005 18:39:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.005 18:39:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.005 18:39:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:23:00.005 18:39:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:23:00.005 18:39:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.005 18:39:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.005 18:39:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:00.005 18:39:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.005 18:39:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.005 18:39:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.005 18:39:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.005 18:39:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.005 18:39:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.005 18:39:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.005 18:39:07 -- 
paths/export.sh@5 -- # export PATH 00:23:00.006 18:39:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.006 18:39:07 -- nvmf/common.sh@46 -- # : 0 00:23:00.006 18:39:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:00.006 18:39:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:00.006 18:39:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:00.006 18:39:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.006 18:39:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.006 18:39:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:00.006 18:39:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:00.006 18:39:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:00.006 18:39:07 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:00.006 18:39:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:00.006 18:39:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.006 18:39:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:00.006 18:39:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:00.006 18:39:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:00.006 18:39:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.006 18:39:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.006 18:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.006 18:39:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:00.006 18:39:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:00.006 18:39:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:00.006 18:39:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:00.006 18:39:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:00.006 18:39:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:00.006 18:39:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.006 18:39:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.006 18:39:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:00.006 18:39:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:00.006 18:39:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:00.006 18:39:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:00.006 18:39:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:00.006 18:39:07 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.006 18:39:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:00.006 18:39:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:00.006 18:39:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:00.006 18:39:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:00.006 18:39:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:00.006 18:39:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:00.006 Cannot find device "nvmf_tgt_br" 00:23:00.006 18:39:07 -- nvmf/common.sh@154 -- # true 00:23:00.006 18:39:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.006 Cannot find device "nvmf_tgt_br2" 00:23:00.006 18:39:07 -- nvmf/common.sh@155 -- # true 00:23:00.006 18:39:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:00.006 18:39:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:00.006 Cannot find device "nvmf_tgt_br" 00:23:00.006 18:39:07 -- nvmf/common.sh@157 -- # true 00:23:00.006 18:39:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:00.006 Cannot find device "nvmf_tgt_br2" 00:23:00.006 18:39:07 -- nvmf/common.sh@158 -- # true 00:23:00.006 18:39:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:00.006 18:39:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:00.006 18:39:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:00.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.006 18:39:07 -- nvmf/common.sh@161 -- # true 00:23:00.006 18:39:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.006 18:39:07 -- nvmf/common.sh@162 -- # true 00:23:00.006 18:39:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:00.006 18:39:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:00.006 18:39:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:00.264 18:39:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:00.264 18:39:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:00.264 18:39:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:00.264 18:39:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:00.264 18:39:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:00.264 18:39:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:00.264 18:39:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:00.264 18:39:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:00.264 18:39:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:00.264 18:39:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:00.264 18:39:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:00.264 18:39:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:00.264 18:39:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:00.264 18:39:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:23:00.264 18:39:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:00.264 18:39:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:00.264 18:39:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:00.264 18:39:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:00.264 18:39:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:00.264 18:39:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:00.264 18:39:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:00.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:23:00.264 00:23:00.264 --- 10.0.0.2 ping statistics --- 00:23:00.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.264 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:00.264 18:39:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:00.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:00.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:00.264 00:23:00.264 --- 10.0.0.3 ping statistics --- 00:23:00.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.265 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:00.265 18:39:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:00.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:00.265 00:23:00.265 --- 10.0.0.1 ping statistics --- 00:23:00.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.265 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:00.265 18:39:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.265 18:39:07 -- nvmf/common.sh@421 -- # return 0 00:23:00.265 18:39:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:00.265 18:39:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.265 18:39:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:00.265 18:39:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:00.265 18:39:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.265 18:39:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:00.265 18:39:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:00.265 18:39:07 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:00.265 18:39:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:00.265 18:39:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:00.265 18:39:07 -- common/autotest_common.sh@10 -- # set +x 00:23:00.265 18:39:07 -- nvmf/common.sh@469 -- # nvmfpid=97806 00:23:00.265 18:39:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:00.265 18:39:07 -- nvmf/common.sh@470 -- # waitforlisten 97806 00:23:00.265 18:39:07 -- common/autotest_common.sh@819 -- # '[' -z 97806 ']' 00:23:00.265 18:39:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.265 18:39:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:00.265 18:39:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
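Condensed from the nvmf/common.sh trace above, the test network is: a namespace nvmf_tgt_ns_spdk holding the target-side ends of the veth pairs, a bridge nvmf_br joining the host-side peers, 10.0.0.1 on the host interface, 10.0.0.2 and 10.0.0.3 inside the namespace, and iptables rules admitting NVMe/TCP traffic on port 4420 across the bridge. A minimal sketch of that topology, using only the interface names, addresses and commands already shown in the trace (the ip link ... up steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings that follow in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify this topology before the target is started.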
00:23:00.265 18:39:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:00.265 18:39:07 -- common/autotest_common.sh@10 -- # set +x 00:23:00.522 [2024-07-14 18:39:07.690395] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:00.523 [2024-07-14 18:39:07.690479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.523 [2024-07-14 18:39:07.832993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.523 [2024-07-14 18:39:07.889124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:00.523 [2024-07-14 18:39:07.889322] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.523 [2024-07-14 18:39:07.889338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.523 [2024-07-14 18:39:07.889359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.523 [2024-07-14 18:39:07.889385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.455 18:39:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:01.455 18:39:08 -- common/autotest_common.sh@852 -- # return 0 00:23:01.455 18:39:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:01.455 18:39:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 18:39:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 [2024-07-14 18:39:08.751813] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 [2024-07-14 18:39:08.763980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- 
common/autotest_common.sh@10 -- # set +x 00:23:01.455 null0 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 null1 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 null2 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 null3 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:01.455 18:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 18:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@47 -- # hostpid=97856 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:01.455 18:39:08 -- host/mdns_discovery.sh@48 -- # waitforlisten 97856 /tmp/host.sock 00:23:01.455 18:39:08 -- common/autotest_common.sh@819 -- # '[' -z 97856 ']' 00:23:01.455 18:39:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:23:01.455 18:39:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:01.455 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:01.455 18:39:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:01.455 18:39:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:01.455 18:39:08 -- common/autotest_common.sh@10 -- # set +x 00:23:01.455 [2024-07-14 18:39:08.868760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:01.455 [2024-07-14 18:39:08.868839] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97856 ] 00:23:01.714 [2024-07-14 18:39:09.010347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.714 [2024-07-14 18:39:09.089753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:01.714 [2024-07-14 18:39:09.089935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.644 18:39:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:02.644 18:39:09 -- common/autotest_common.sh@852 -- # return 0 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@57 -- # avahipid=97884 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:02.644 18:39:09 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:02.644 Process 981 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:02.644 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:02.644 Successfully dropped root privileges. 00:23:02.644 avahi-daemon 0.8 starting up. 00:23:02.644 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:02.644 Successfully called chroot(). 00:23:02.644 Successfully dropped remaining capabilities. 00:23:02.644 No service file found in /etc/avahi/services. 00:23:03.575 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:03.575 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:03.576 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:03.576 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:03.576 Network interface enumeration completed. 00:23:03.576 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:23:03.576 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:03.576 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:23:03.576 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:03.576 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 4039239878. 
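The avahi-daemon above is launched inside the target namespace with its configuration fed through a process substitution (-f /dev/fd/63); expanded, the echoed configuration is just the following, restricting mDNS to the two target-side interfaces and to IPv4 (interface names as used by this test):

  [server]
  allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
  use-ipv4=yes
  use-ipv6=no

With that responder running, the host-side nvmf_tgt (RPC socket /tmp/host.sock) is configured by the rpc_cmd calls that follow in the trace.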
00:23:03.576 18:39:10 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:03.576 18:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.576 18:39:10 -- common/autotest_common.sh@10 -- # set +x 00:23:03.576 18:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:03.576 18:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.576 18:39:10 -- common/autotest_common.sh@10 -- # set +x 00:23:03.576 18:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:03.576 18:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@68 -- # sort 00:23:03.576 18:39:10 -- host/mdns_discovery.sh@68 -- # xargs 00:23:03.576 18:39:10 -- common/autotest_common.sh@10 -- # set +x 00:23:03.576 18:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.833 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.833 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@64 -- # sort 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@64 -- # xargs 00:23:03.833 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:03.833 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.833 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.833 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.833 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.833 18:39:11 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:03.833 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@68 -- # sort 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@68 -- # xargs 00:23:03.834 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@64 -- # sort 00:23:03.834 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@64 -- # xargs 00:23:03.834 18:39:11 -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.834 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:03.834 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.834 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.834 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.834 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.834 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@68 -- # sort 00:23:03.834 18:39:11 -- host/mdns_discovery.sh@68 -- # xargs 00:23:03.834 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.091 [2024-07-14 18:39:11.278073] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@64 -- # sort 00:23:04.091 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@64 -- # xargs 00:23:04.091 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.091 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:04.091 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.091 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.091 [2024-07-14 18:39:11.368784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.091 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:04.091 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.091 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.091 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:04.091 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.091 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.091 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:04.091 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.091 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.091 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.091 18:39:11 -- host/mdns_discovery.sh@116 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:04.091 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.091 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.092 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.092 18:39:11 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:04.092 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.092 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.092 [2024-07-14 18:39:11.408766] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:04.092 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.092 18:39:11 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:04.092 18:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.092 18:39:11 -- common/autotest_common.sh@10 -- # set +x 00:23:04.092 [2024-07-14 18:39:11.416742] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:04.092 18:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.092 18:39:11 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97936 00:23:04.092 18:39:11 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:04.092 18:39:11 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:05.024 [2024-07-14 18:39:12.178075] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:05.024 Established under name 'CDC' 00:23:05.282 [2024-07-14 18:39:12.578105] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:05.282 [2024-07-14 18:39:12.578145] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:05.282 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:05.282 cookie is 0 00:23:05.282 is_local: 1 00:23:05.282 our_own: 0 00:23:05.282 wide_area: 0 00:23:05.282 multicast: 1 00:23:05.282 cached: 1 00:23:05.282 [2024-07-14 18:39:12.678082] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:05.282 [2024-07-14 18:39:12.678101] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:05.282 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:05.282 cookie is 0 00:23:05.282 is_local: 1 00:23:05.282 our_own: 0 00:23:05.282 wide_area: 0 00:23:05.282 multicast: 1 00:23:05.282 cached: 1 00:23:06.222 [2024-07-14 18:39:13.587288] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:06.222 [2024-07-14 18:39:13.587316] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:06.222 [2024-07-14 18:39:13.587334] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:06.480 [2024-07-14 18:39:13.673381] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:06.480 [2024-07-14 18:39:13.687033] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:06.480 [2024-07-14 18:39:13.687053] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:06.480 [2024-07-14 18:39:13.687083] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:06.480 [2024-07-14 18:39:13.735997] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:06.480 [2024-07-14 18:39:13.736038] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:06.480 [2024-07-14 18:39:13.773237] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:06.480 [2024-07-14 18:39:13.827982] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:06.480 [2024-07-14 18:39:13.828009] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:09.009 18:39:16 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:09.009 18:39:16 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:09.009 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.009 18:39:16 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:09.009 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.009 18:39:16 -- host/mdns_discovery.sh@80 -- # sort 00:23:09.009 18:39:16 -- host/mdns_discovery.sh@80 -- # xargs 00:23:09.268 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@76 -- # sort 00:23:09.268 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.268 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@76 -- # xargs 00:23:09.268 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:09.268 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@68 -- # sort 00:23:09.268 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@68 -- # xargs 00:23:09.268 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:09.268 18:39:16 -- 
host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@64 -- # sort 00:23:09.268 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.268 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@64 -- # xargs 00:23:09.268 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:09.268 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:09.268 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 18:39:16 -- host/mdns_discovery.sh@72 -- # xargs 00:23:09.268 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:09.526 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.526 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@72 -- # xargs 00:23:09.526 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:09.526 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.526 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.526 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:09.526 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.526 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.526 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:09.526 18:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.526 18:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:09.526 18:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.526 18:39:16 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:10.460 18:39:17 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:10.460 18:39:17 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.460 18:39:17 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:10.460 18:39:17 -- host/mdns_discovery.sh@64 -- # sort 00:23:10.460 18:39:17 -- host/mdns_discovery.sh@64 -- # xargs 00:23:10.460 18:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.460 18:39:17 -- common/autotest_common.sh@10 -- # set +x 00:23:10.718 18:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:10.718 18:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.718 18:39:17 -- common/autotest_common.sh@10 -- # set +x 00:23:10.718 18:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:10.718 18:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.718 18:39:17 -- common/autotest_common.sh@10 -- # set +x 00:23:10.718 [2024-07-14 18:39:17.980021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:10.718 [2024-07-14 18:39:17.981225] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:10.718 [2024-07-14 18:39:17.981263] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:10.718 [2024-07-14 18:39:17.981300] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:10.718 [2024-07-14 18:39:17.981314] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:10.718 18:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.718 18:39:17 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:10.718 18:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.718 18:39:17 -- common/autotest_common.sh@10 -- # set +x 00:23:10.719 [2024-07-14 18:39:17.987924] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:10.719 [2024-07-14 18:39:17.988208] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:10.719 [2024-07-14 18:39:17.988257] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:10.719 18:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.719 18:39:17 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:10.719 [2024-07-14 18:39:18.119288] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:10.719 [2024-07-14 18:39:18.119501] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:10.977 [2024-07-14 18:39:18.180671] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:10.977 [2024-07-14 18:39:18.180716] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:10.977 [2024-07-14 18:39:18.180723] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.977 [2024-07-14 18:39:18.180742] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:10.977 [2024-07-14 18:39:18.180786] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:10.977 [2024-07-14 18:39:18.180796] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:10.977 [2024-07-14 18:39:18.180801] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:10.977 [2024-07-14 18:39:18.180814] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:10.977 [2024-07-14 18:39:18.226395] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:10.977 [2024-07-14 18:39:18.226418] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.977 [2024-07-14 18:39:18.227412] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:10.977 [2024-07-14 18:39:18.227459] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:11.926 18:39:18 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:11.926 18:39:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:11.926 18:39:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.926 18:39:18 -- host/mdns_discovery.sh@68 -- # sort 00:23:11.926 18:39:18 -- host/mdns_discovery.sh@68 -- # xargs 00:23:11.926 18:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:18 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@64 -- # sort 00:23:11.926 18:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@64 -- # xargs 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:11.926 18:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # xargs 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:11.926 18:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@72 -- # xargs 00:23:11.926 18:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:11.926 18:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:11.926 18:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 [2024-07-14 18:39:19.289185] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:11.926 [2024-07-14 18:39:19.289221] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:11.926 [2024-07-14 18:39:19.289256] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:11.926 [2024-07-14 18:39:19.289270] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:11.926 18:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.926 18:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.926 [2024-07-14 18:39:19.297188] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:11.926 [2024-07-14 18:39:19.297244] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:11.926 [2024-07-14 18:39:19.297930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.926 [2024-07-14 18:39:19.297963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.926 [2024-07-14 18:39:19.297993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.926 [2024-07-14 18:39:19.298003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.926 [2024-07-14 18:39:19.298013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:11.926 [2024-07-14 18:39:19.298022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.926 [2024-07-14 18:39:19.298032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.926 [2024-07-14 18:39:19.298041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.926 [2024-07-14 18:39:19.298050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:11.926 18:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.926 18:39:19 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:11.926 [2024-07-14 18:39:19.304773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.926 [2024-07-14 18:39:19.304808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.927 [2024-07-14 18:39:19.304821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.927 [2024-07-14 18:39:19.304830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.927 [2024-07-14 18:39:19.304840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.927 [2024-07-14 18:39:19.304849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.927 [2024-07-14 18:39:19.304859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.927 [2024-07-14 18:39:19.304882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.927 [2024-07-14 18:39:19.304891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.307888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.314738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.317925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:11.927 [2024-07-14 18:39:19.318074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.318123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.318139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:11.927 [2024-07-14 18:39:19.318149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.318166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.318182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.318190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.318200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:11.927 [2024-07-14 18:39:19.318234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.927 [2024-07-14 18:39:19.324750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:11.927 [2024-07-14 18:39:19.324854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.324900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.324915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:11.927 [2024-07-14 18:39:19.324925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.324940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.324954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.324962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.324970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:11.927 [2024-07-14 18:39:19.324984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.927 [2024-07-14 18:39:19.328005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:11.927 [2024-07-14 18:39:19.328116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.328160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.328175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:11.927 [2024-07-14 18:39:19.328185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.328200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.328229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.328239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.328248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:11.927 [2024-07-14 18:39:19.328261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.927 [2024-07-14 18:39:19.334802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:11.927 [2024-07-14 18:39:19.334905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.334949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.334964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:11.927 [2024-07-14 18:39:19.334974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.334988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.335001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.335010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.335018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:11.927 [2024-07-14 18:39:19.335031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.927 [2024-07-14 18:39:19.338069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:11.927 [2024-07-14 18:39:19.338161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.338205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.338220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:11.927 [2024-07-14 18:39:19.338229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.338244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.338274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.338284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.338293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:11.927 [2024-07-14 18:39:19.338306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.927 [2024-07-14 18:39:19.344876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:11.927 [2024-07-14 18:39:19.344977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.345023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.345039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:11.927 [2024-07-14 18:39:19.345048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.345064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.345077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.345086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.345094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:11.927 [2024-07-14 18:39:19.345108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.927 [2024-07-14 18:39:19.348133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:11.927 [2024-07-14 18:39:19.348231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.348278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.927 [2024-07-14 18:39:19.348293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:11.927 [2024-07-14 18:39:19.348303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:11.927 [2024-07-14 18:39:19.348319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:11.927 [2024-07-14 18:39:19.348349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:11.927 [2024-07-14 18:39:19.348359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:11.927 [2024-07-14 18:39:19.348368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:11.927 [2024-07-14 18:39:19.348382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.187 [2024-07-14 18:39:19.354930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.187 [2024-07-14 18:39:19.355029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.355075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.355106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.187 [2024-07-14 18:39:19.355116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.187 [2024-07-14 18:39:19.355131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.187 [2024-07-14 18:39:19.355145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.187 [2024-07-14 18:39:19.355153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.187 [2024-07-14 18:39:19.355161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.187 [2024-07-14 18:39:19.355175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.187 [2024-07-14 18:39:19.358182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.187 [2024-07-14 18:39:19.358276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.358320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.358335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.187 [2024-07-14 18:39:19.358344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.187 [2024-07-14 18:39:19.358359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.187 [2024-07-14 18:39:19.358389] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.187 [2024-07-14 18:39:19.358400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.187 [2024-07-14 18:39:19.358408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.187 [2024-07-14 18:39:19.358422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.187 [2024-07-14 18:39:19.364982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.187 [2024-07-14 18:39:19.365076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.365120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.365135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.187 [2024-07-14 18:39:19.365144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.187 [2024-07-14 18:39:19.365159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.187 [2024-07-14 18:39:19.365172] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.187 [2024-07-14 18:39:19.365180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.187 [2024-07-14 18:39:19.365188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.187 [2024-07-14 18:39:19.365202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.187 [2024-07-14 18:39:19.368230] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.187 [2024-07-14 18:39:19.368321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.368365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.368380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.187 [2024-07-14 18:39:19.368389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.187 [2024-07-14 18:39:19.368404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.187 [2024-07-14 18:39:19.368432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.187 [2024-07-14 18:39:19.368442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.187 [2024-07-14 18:39:19.368450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.187 [2024-07-14 18:39:19.368479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.187 [2024-07-14 18:39:19.375032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.187 [2024-07-14 18:39:19.375126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.375170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.375186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.187 [2024-07-14 18:39:19.375195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.187 [2024-07-14 18:39:19.375210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.187 [2024-07-14 18:39:19.375223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.187 [2024-07-14 18:39:19.375231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.187 [2024-07-14 18:39:19.375239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.187 [2024-07-14 18:39:19.375252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.187 [2024-07-14 18:39:19.378278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.187 [2024-07-14 18:39:19.378371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.187 [2024-07-14 18:39:19.378416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.378431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.188 [2024-07-14 18:39:19.378441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.378455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.378485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.378495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.378503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.188 [2024-07-14 18:39:19.378538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.188 [2024-07-14 18:39:19.385084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.188 [2024-07-14 18:39:19.385185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.385230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.385245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.188 [2024-07-14 18:39:19.385255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.385270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.385283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.385292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.385300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.188 [2024-07-14 18:39:19.385314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.188 [2024-07-14 18:39:19.388330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.188 [2024-07-14 18:39:19.388429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.388475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.388491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.188 [2024-07-14 18:39:19.388500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.388534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.388606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.388619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.388628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.188 [2024-07-14 18:39:19.388642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.188 [2024-07-14 18:39:19.395136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.188 [2024-07-14 18:39:19.395231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.395275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.395291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.188 [2024-07-14 18:39:19.395300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.395315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.395328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.395336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.395345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.188 [2024-07-14 18:39:19.395358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.188 [2024-07-14 18:39:19.398383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.188 [2024-07-14 18:39:19.398492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.398572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.398589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.188 [2024-07-14 18:39:19.398599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.398630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.398671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.398682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.398691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.188 [2024-07-14 18:39:19.398705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.188 [2024-07-14 18:39:19.405186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.188 [2024-07-14 18:39:19.405280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.405324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.405340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.188 [2024-07-14 18:39:19.405349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.405364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.405377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.405385] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.405393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.188 [2024-07-14 18:39:19.405407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.188 [2024-07-14 18:39:19.408433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.188 [2024-07-14 18:39:19.408585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.408631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.408646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.188 [2024-07-14 18:39:19.408657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.408672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.408703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.408713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.408722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.188 [2024-07-14 18:39:19.408736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.188 [2024-07-14 18:39:19.415235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.188 [2024-07-14 18:39:19.415328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.415372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.415387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.188 [2024-07-14 18:39:19.415397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.415411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.415424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.415432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.415455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.188 [2024-07-14 18:39:19.415468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.188 [2024-07-14 18:39:19.418550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.188 [2024-07-14 18:39:19.418647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.418692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.418707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.188 [2024-07-14 18:39:19.418717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.418732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.418771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.418782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.418791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.188 [2024-07-14 18:39:19.418820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
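A note on the block of retries above: errno 111 is ECONNREFUSED, so every "connect() failed" line is the reconnect path probing the old 4420 listeners (cnode0 at 10.0.0.2:4420, cnode20 at 10.0.0.3:4420) after they were taken down; the discovery poller below then reports those 4420 paths as "not found" and the 4421 paths as "found again", which is what ends the loop. A hypothetical one-off probe (not part of the captured run; plain bash, addresses and ports taken from the log) that shows the same refusal from the initiator's side:

  for ip in 10.0.0.2 10.0.0.3; do
    for port in 4420 4421; do
      # /dev/tcp is a bash pseudo-device; a refused connect here is the same ECONNREFUSED seen in posix_sock_create above
      if timeout 1 bash -c "exec 3<>/dev/tcp/$ip/$port" 2>/dev/null; then
        echo "$ip:$port accepting"
      else
        echo "$ip:$port refused or unreachable"
      fi
    done
  done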
00:23:12.188 [2024-07-14 18:39:19.425286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:12.188 [2024-07-14 18:39:19.425382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.425426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.425442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e24760 with addr=10.0.0.3, port=4420 00:23:12.188 [2024-07-14 18:39:19.425451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24760 is same with the state(5) to be set 00:23:12.188 [2024-07-14 18:39:19.425466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e24760 (9): Bad file descriptor 00:23:12.188 [2024-07-14 18:39:19.425479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:12.188 [2024-07-14 18:39:19.425487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:12.188 [2024-07-14 18:39:19.425496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:12.188 [2024-07-14 18:39:19.425542] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.188 [2024-07-14 18:39:19.428603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.188 [2024-07-14 18:39:19.428694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.428737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.188 [2024-07-14 18:39:19.428752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200b2c0 with addr=10.0.0.2, port=4420 00:23:12.189 [2024-07-14 18:39:19.428761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b2c0 is same with the state(5) to be set 00:23:12.189 [2024-07-14 18:39:19.428776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b2c0 (9): Bad file descriptor 00:23:12.189 [2024-07-14 18:39:19.428818] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:12.189 [2024-07-14 18:39:19.428838] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.189 [2024-07-14 18:39:19.428857] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:12.189 [2024-07-14 18:39:19.428891] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:12.189 [2024-07-14 18:39:19.428907] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:12.189 [2024-07-14 18:39:19.428920] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:12.189 [2024-07-14 18:39:19.428952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.189 [2024-07-14 18:39:19.428964] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:12.189 [2024-07-14 18:39:19.428973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.189 [2024-07-14 18:39:19.428995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.189 [2024-07-14 18:39:19.514904] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.189 [2024-07-14 18:39:19.514985] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:13.123 18:39:20 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:13.123 18:39:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.123 18:39:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:13.123 18:39:20 -- host/mdns_discovery.sh@68 -- # sort 00:23:13.123 18:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.123 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.123 18:39:20 -- host/mdns_discovery.sh@68 -- # xargs 00:23:13.124 18:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:13.124 18:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.124 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@64 -- # xargs 00:23:13.124 18:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:13.124 18:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.124 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # xargs 00:23:13.124 18:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:13.124 18:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.124 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:13.124 18:39:20 -- 
host/mdns_discovery.sh@72 -- # xargs 00:23:13.124 18:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:13.124 18:39:20 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:13.124 18:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.124 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.124 18:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.382 18:39:20 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:13.382 18:39:20 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:13.382 18:39:20 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:13.382 18:39:20 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:13.382 18:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.382 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.382 18:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.382 18:39:20 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:13.382 [2024-07-14 18:39:20.678104] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@80 -- # sort 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@80 -- # xargs 00:23:14.317 18:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.317 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:23:14.317 18:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:14.317 18:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.317 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@68 -- # sort 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@68 -- # xargs 00:23:14.317 18:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.317 18:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@64 -- # sort 00:23:14.317 18:39:21 -- host/mdns_discovery.sh@64 -- # xargs 00:23:14.317 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:23:14.317 18:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:14.575 18:39:21 -- 
host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:14.575 18:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.575 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:14.575 18:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:14.575 18:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.575 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:23:14.575 18:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:14.575 18:39:21 -- common/autotest_common.sh@640 -- # local es=0 00:23:14.575 18:39:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:14.575 18:39:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:14.575 18:39:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:14.575 18:39:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:14.575 18:39:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:14.575 18:39:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:14.575 18:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.575 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:23:14.575 [2024-07-14 18:39:21.827624] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:14.575 2024/07/14 18:39:21 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:14.575 request: 00:23:14.575 { 00:23:14.575 "method": "bdev_nvme_start_mdns_discovery", 00:23:14.575 "params": { 00:23:14.575 "name": "mdns", 00:23:14.575 "svcname": "_nvme-disc._http", 00:23:14.575 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:14.575 } 00:23:14.575 } 00:23:14.575 Got JSON-RPC error response 00:23:14.575 GoRPCClient: error on JSON-RPC call 00:23:14.575 18:39:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:14.575 18:39:21 -- common/autotest_common.sh@643 -- # es=1 00:23:14.575 18:39:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:14.575 18:39:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:14.575 18:39:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:14.575 18:39:21 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:14.834 [2024-07-14 18:39:22.216180] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:15.092 [2024-07-14 18:39:22.316182] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:15.092 [2024-07-14 18:39:22.416182] 
bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:15.093 [2024-07-14 18:39:22.416207] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:15.093 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:15.093 cookie is 0 00:23:15.093 is_local: 1 00:23:15.093 our_own: 0 00:23:15.093 wide_area: 0 00:23:15.093 multicast: 1 00:23:15.093 cached: 1 00:23:15.093 [2024-07-14 18:39:22.516185] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:15.093 [2024-07-14 18:39:22.516215] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:15.093 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:15.093 cookie is 0 00:23:15.093 is_local: 1 00:23:15.093 our_own: 0 00:23:15.093 wide_area: 0 00:23:15.093 multicast: 1 00:23:15.093 cached: 1 00:23:16.026 [2024-07-14 18:39:23.429479] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:16.026 [2024-07-14 18:39:23.429530] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:16.026 [2024-07-14 18:39:23.429548] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:16.283 [2024-07-14 18:39:23.515677] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:16.283 [2024-07-14 18:39:23.529482] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:16.283 [2024-07-14 18:39:23.529523] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:16.283 [2024-07-14 18:39:23.529540] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.283 [2024-07-14 18:39:23.585876] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:16.283 [2024-07-14 18:39:23.585920] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:16.283 [2024-07-14 18:39:23.617224] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:16.283 [2024-07-14 18:39:23.683438] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:16.283 [2024-07-14 18:39:23.683468] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@80 -- # sort 00:23:19.564 18:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.564 18:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@80 -- # xargs 00:23:19.564 18:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.564 18:39:26 -- 
host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.564 18:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.564 18:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@76 -- # sort 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@76 -- # xargs 00:23:19.564 18:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.564 18:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:19.564 18:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:19.564 18:39:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:19.822 18:39:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:19.822 18:39:27 -- common/autotest_common.sh@640 -- # local es=0 00:23:19.822 18:39:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:19.822 18:39:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:19.822 18:39:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.822 18:39:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:19.822 18:39:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.822 18:39:27 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:19.822 18:39:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.822 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:23:19.822 [2024-07-14 18:39:27.021804] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:19.822 2024/07/14 18:39:27 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:19.822 request: 00:23:19.822 { 00:23:19.822 "method": "bdev_nvme_start_mdns_discovery", 00:23:19.822 "params": { 00:23:19.822 "name": "cdc", 00:23:19.822 "svcname": "_nvme-disc._tcp", 00:23:19.822 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:19.822 } 00:23:19.822 } 00:23:19.822 Got JSON-RPC error response 00:23:19.822 GoRPCClient: error on JSON-RPC call 00:23:19.822 18:39:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:19.822 
18:39:27 -- common/autotest_common.sh@643 -- # es=1 00:23:19.822 18:39:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:19.822 18:39:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:19.822 18:39:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@76 -- # sort 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@76 -- # xargs 00:23:19.822 18:39:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.822 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:23:19.822 18:39:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.822 18:39:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.822 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@64 -- # sort 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@64 -- # xargs 00:23:19.822 18:39:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:19.822 18:39:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.822 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:23:19.822 18:39:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@197 -- # kill 97856 00:23:19.822 18:39:27 -- host/mdns_discovery.sh@200 -- # wait 97856 00:23:20.080 [2024-07-14 18:39:27.256229] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:20.080 18:39:27 -- host/mdns_discovery.sh@201 -- # kill 97936 00:23:20.080 Got SIGTERM, quitting. 00:23:20.080 18:39:27 -- host/mdns_discovery.sh@202 -- # kill 97884 00:23:20.080 18:39:27 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:20.080 18:39:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:20.080 18:39:27 -- nvmf/common.sh@116 -- # sync 00:23:20.080 Got SIGTERM, quitting. 00:23:20.080 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:20.080 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:20.080 avahi-daemon 0.8 exiting. 
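For reference, the two expected JSON-RPC failures in this test case show that bdev_nvme_start_mdns_discovery refuses to start a second discovery service when either the name or the mDNS service type is already in use, returning Code=-17 (File exists). Condensed from the calls exercised above (socket path, names and host NQN exactly as logged; rpc_cmd is the suite's wrapper around scripts/rpc.py):

  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp  -q nqn.2021-12.io.spdk:test   # accepted
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test   # rejected: name "mdns" already running
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test     # rejected: _nvme-disc._tcp already handled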
00:23:20.080 18:39:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:20.080 18:39:27 -- nvmf/common.sh@119 -- # set +e 00:23:20.081 18:39:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:20.081 18:39:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:20.081 rmmod nvme_tcp 00:23:20.081 rmmod nvme_fabrics 00:23:20.081 rmmod nvme_keyring 00:23:20.081 18:39:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:20.081 18:39:27 -- nvmf/common.sh@123 -- # set -e 00:23:20.081 18:39:27 -- nvmf/common.sh@124 -- # return 0 00:23:20.081 18:39:27 -- nvmf/common.sh@477 -- # '[' -n 97806 ']' 00:23:20.081 18:39:27 -- nvmf/common.sh@478 -- # killprocess 97806 00:23:20.081 18:39:27 -- common/autotest_common.sh@926 -- # '[' -z 97806 ']' 00:23:20.081 18:39:27 -- common/autotest_common.sh@930 -- # kill -0 97806 00:23:20.081 18:39:27 -- common/autotest_common.sh@931 -- # uname 00:23:20.081 18:39:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:20.081 18:39:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97806 00:23:20.081 killing process with pid 97806 00:23:20.081 18:39:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:20.081 18:39:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:20.081 18:39:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97806' 00:23:20.081 18:39:27 -- common/autotest_common.sh@945 -- # kill 97806 00:23:20.081 18:39:27 -- common/autotest_common.sh@950 -- # wait 97806 00:23:20.338 18:39:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:20.338 18:39:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:20.338 18:39:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:20.338 18:39:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.338 18:39:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:20.338 18:39:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.338 18:39:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.338 18:39:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.338 18:39:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:20.338 00:23:20.338 real 0m20.560s 00:23:20.338 user 0m40.277s 00:23:20.338 sys 0m2.016s 00:23:20.338 18:39:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.338 ************************************ 00:23:20.338 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:23:20.338 END TEST nvmf_mdns_discovery 00:23:20.338 ************************************ 00:23:20.597 18:39:27 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:20.597 18:39:27 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:20.597 18:39:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:20.597 18:39:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:20.597 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:23:20.597 ************************************ 00:23:20.597 START TEST nvmf_multipath 00:23:20.597 ************************************ 00:23:20.597 18:39:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:20.597 * Looking for test storage... 
00:23:20.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:20.597 18:39:27 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:20.597 18:39:27 -- nvmf/common.sh@7 -- # uname -s 00:23:20.597 18:39:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.597 18:39:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.597 18:39:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.597 18:39:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.597 18:39:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.597 18:39:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.597 18:39:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.597 18:39:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.597 18:39:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.597 18:39:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.597 18:39:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:23:20.597 18:39:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:23:20.597 18:39:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.597 18:39:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.597 18:39:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:20.597 18:39:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.597 18:39:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.597 18:39:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.597 18:39:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.597 18:39:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.597 18:39:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.597 18:39:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.597 18:39:27 -- paths/export.sh@5 
-- # export PATH 00:23:20.597 18:39:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.597 18:39:27 -- nvmf/common.sh@46 -- # : 0 00:23:20.597 18:39:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:20.597 18:39:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:20.597 18:39:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:20.597 18:39:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.597 18:39:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.597 18:39:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:20.597 18:39:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:20.597 18:39:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:20.597 18:39:27 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:20.597 18:39:27 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:20.597 18:39:27 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.597 18:39:27 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:20.597 18:39:27 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.597 18:39:27 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:20.597 18:39:27 -- host/multipath.sh@30 -- # nvmftestinit 00:23:20.597 18:39:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:20.597 18:39:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.597 18:39:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:20.597 18:39:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:20.597 18:39:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:20.597 18:39:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.597 18:39:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.597 18:39:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.597 18:39:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:20.597 18:39:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:20.597 18:39:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:20.597 18:39:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:20.597 18:39:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:20.597 18:39:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:20.597 18:39:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.597 18:39:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.597 18:39:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:20.597 18:39:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:20.597 18:39:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:20.598 18:39:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:20.598 18:39:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:20.598 18:39:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.598 18:39:27 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:20.598 18:39:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:20.598 18:39:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:20.598 18:39:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:20.598 18:39:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:20.598 18:39:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:20.598 Cannot find device "nvmf_tgt_br" 00:23:20.598 18:39:27 -- nvmf/common.sh@154 -- # true 00:23:20.598 18:39:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.598 Cannot find device "nvmf_tgt_br2" 00:23:20.598 18:39:27 -- nvmf/common.sh@155 -- # true 00:23:20.598 18:39:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:20.598 18:39:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:20.598 Cannot find device "nvmf_tgt_br" 00:23:20.598 18:39:27 -- nvmf/common.sh@157 -- # true 00:23:20.598 18:39:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:20.598 Cannot find device "nvmf_tgt_br2" 00:23:20.598 18:39:27 -- nvmf/common.sh@158 -- # true 00:23:20.598 18:39:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:20.598 18:39:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:20.909 18:39:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.909 18:39:28 -- nvmf/common.sh@161 -- # true 00:23:20.909 18:39:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.909 18:39:28 -- nvmf/common.sh@162 -- # true 00:23:20.909 18:39:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:20.909 18:39:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:20.909 18:39:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:20.909 18:39:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:20.909 18:39:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:20.909 18:39:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:20.909 18:39:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:20.909 18:39:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:20.909 18:39:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:20.909 18:39:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:20.909 18:39:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:20.909 18:39:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:20.909 18:39:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:20.909 18:39:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.909 18:39:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:20.909 18:39:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:20.909 18:39:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:20.909 18:39:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:20.909 18:39:28 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.909 18:39:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:20.909 18:39:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:20.909 18:39:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:20.909 18:39:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:20.909 18:39:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:20.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:23:20.909 00:23:20.909 --- 10.0.0.2 ping statistics --- 00:23:20.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.909 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:20.909 18:39:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:20.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:20.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:23:20.909 00:23:20.909 --- 10.0.0.3 ping statistics --- 00:23:20.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.909 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:20.909 18:39:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:20.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:20.909 00:23:20.909 --- 10.0.0.1 ping statistics --- 00:23:20.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.909 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:20.909 18:39:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.909 18:39:28 -- nvmf/common.sh@421 -- # return 0 00:23:20.909 18:39:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:20.909 18:39:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.909 18:39:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:20.909 18:39:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:20.909 18:39:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.909 18:39:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:20.909 18:39:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:20.909 18:39:28 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:20.909 18:39:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:20.909 18:39:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:20.909 18:39:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.909 18:39:28 -- nvmf/common.sh@469 -- # nvmfpid=98449 00:23:20.909 18:39:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:20.909 18:39:28 -- nvmf/common.sh@470 -- # waitforlisten 98449 00:23:20.909 18:39:28 -- common/autotest_common.sh@819 -- # '[' -z 98449 ']' 00:23:20.909 18:39:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.909 18:39:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:20.909 18:39:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
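For reference, the veth topology that nvmf_veth_init builds above (condensed from the ip commands in this log; interface names, addresses and the namespace name are exactly the ones the suite uses) keeps the initiator at 10.0.0.1 in the root namespace, places both target addresses inside the nvmf_tgt_ns_spdk namespace, and bridges all the peer ends together:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # plus the "ip link set ... up" and iptables ACCEPT rules shown above; the three pings then confirm 10.0.0.1/2/3 reachability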
00:23:20.909 18:39:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:20.909 18:39:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.909 [2024-07-14 18:39:28.295131] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:20.909 [2024-07-14 18:39:28.295400] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.168 [2024-07-14 18:39:28.439468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.168 [2024-07-14 18:39:28.530878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:21.168 [2024-07-14 18:39:28.531373] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.168 [2024-07-14 18:39:28.531580] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.168 [2024-07-14 18:39:28.531754] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.168 [2024-07-14 18:39:28.532086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.168 [2024-07-14 18:39:28.532100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.098 18:39:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:22.098 18:39:29 -- common/autotest_common.sh@852 -- # return 0 00:23:22.098 18:39:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:22.098 18:39:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:22.098 18:39:29 -- common/autotest_common.sh@10 -- # set +x 00:23:22.098 18:39:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.098 18:39:29 -- host/multipath.sh@33 -- # nvmfapp_pid=98449 00:23:22.098 18:39:29 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:22.355 [2024-07-14 18:39:29.523705] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.355 18:39:29 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:22.355 Malloc0 00:23:22.612 18:39:29 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:22.612 18:39:30 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:22.869 18:39:30 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.126 [2024-07-14 18:39:30.454313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.127 18:39:30 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:23.388 [2024-07-14 18:39:30.674482] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
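[editor's note] With nvmf_tgt (pid 98449) running inside the namespace, host/multipath.sh provisions the subsystem over RPC: a TCP transport, a Malloc bdev (64 MiB, 512-byte blocks), subsystem cnode1, and two TCP listeners on the same address so the host sees two paths. The sequence above as a standalone sketch, with the rpc.py invocations copied from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421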
00:23:23.388 18:39:30 -- host/multipath.sh@44 -- # bdevperf_pid=98553 00:23:23.388 18:39:30 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:23.388 18:39:30 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.388 18:39:30 -- host/multipath.sh@47 -- # waitforlisten 98553 /var/tmp/bdevperf.sock 00:23:23.388 18:39:30 -- common/autotest_common.sh@819 -- # '[' -z 98553 ']' 00:23:23.388 18:39:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.388 18:39:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:23.388 18:39:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.388 18:39:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:23.388 18:39:30 -- common/autotest_common.sh@10 -- # set +x 00:23:24.322 18:39:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:24.322 18:39:31 -- common/autotest_common.sh@852 -- # return 0 00:23:24.322 18:39:31 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:24.579 18:39:31 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:24.837 Nvme0n1 00:23:25.095 18:39:32 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:25.352 Nvme0n1 00:23:25.352 18:39:32 -- host/multipath.sh@78 -- # sleep 1 00:23:25.352 18:39:32 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:26.282 18:39:33 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:26.282 18:39:33 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.538 18:39:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:26.795 18:39:34 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:26.795 18:39:34 -- host/multipath.sh@65 -- # dtrace_pid=98642 00:23:26.795 18:39:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:26.795 18:39:34 -- host/multipath.sh@66 -- # sleep 6 00:23:33.351 18:39:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:33.351 18:39:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:33.351 18:39:40 -- host/multipath.sh@67 -- # active_port=4421 00:23:33.351 18:39:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.351 Attaching 4 probes... 
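[editor's note] On the host side, bdevperf was started with -z so it only begins I/O once configured over /var/tmp/bdevperf.sock; both portals are then attached to the same controller name, the second with -x multipath, and perform_tests kicks off the 90-second verify workload. A sketch of that configuration, reusing the exact RPCs shown above:

  # bdevperf already running as: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
  bp_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
  bp_rpc bdev_nvme_set_options -r -1
  bp_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  bp_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &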
00:23:33.351 @path[10.0.0.2, 4421]: 18811 00:23:33.351 @path[10.0.0.2, 4421]: 19166 00:23:33.351 @path[10.0.0.2, 4421]: 19199 00:23:33.351 @path[10.0.0.2, 4421]: 19132 00:23:33.351 @path[10.0.0.2, 4421]: 19145 00:23:33.351 18:39:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:33.351 18:39:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:33.351 18:39:40 -- host/multipath.sh@69 -- # sed -n 1p 00:23:33.351 18:39:40 -- host/multipath.sh@69 -- # port=4421 00:23:33.351 18:39:40 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.351 18:39:40 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.351 18:39:40 -- host/multipath.sh@72 -- # kill 98642 00:23:33.351 18:39:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.351 18:39:40 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:33.351 18:39:40 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:33.351 18:39:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:33.609 18:39:40 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:33.609 18:39:40 -- host/multipath.sh@65 -- # dtrace_pid=98772 00:23:33.609 18:39:40 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:33.609 18:39:40 -- host/multipath.sh@66 -- # sleep 6 00:23:40.163 18:39:46 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:40.163 18:39:46 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:40.163 18:39:47 -- host/multipath.sh@67 -- # active_port=4420 00:23:40.163 18:39:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.163 Attaching 4 probes... 
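[editor's note] confirm_io_on_port (the host/multipath.sh@64-@73 block above) starts a probe via scripts/bpftrace.sh with scripts/bpf/nvmf_path.bt against the target pid, lets I/O run for six seconds, then cross-checks two things: which listener the target reports in the requested ANA state, and which port the probe actually counted completions on. A rough reconstruction of the check (the exact pipeline ordering in host/multipath.sh may differ slightly):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
      jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # trace.txt holds lines like "@path[10.0.0.2, 4421]: 18811"; keep the port of the first one
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
  [[ $port == "$active_port" ]]   # the test fails here if I/O ran on the wrong listener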
00:23:40.163 @path[10.0.0.2, 4420]: 18932 00:23:40.163 @path[10.0.0.2, 4420]: 19245 00:23:40.163 @path[10.0.0.2, 4420]: 19228 00:23:40.163 @path[10.0.0.2, 4420]: 19268 00:23:40.163 @path[10.0.0.2, 4420]: 19269 00:23:40.163 18:39:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:40.163 18:39:47 -- host/multipath.sh@69 -- # sed -n 1p 00:23:40.163 18:39:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:40.163 18:39:47 -- host/multipath.sh@69 -- # port=4420 00:23:40.163 18:39:47 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:40.163 18:39:47 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:40.163 18:39:47 -- host/multipath.sh@72 -- # kill 98772 00:23:40.163 18:39:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.163 18:39:47 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:40.163 18:39:47 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:40.163 18:39:47 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:40.421 18:39:47 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:40.421 18:39:47 -- host/multipath.sh@65 -- # dtrace_pid=98907 00:23:40.421 18:39:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.421 18:39:47 -- host/multipath.sh@66 -- # sleep 6 00:23:47.059 18:39:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:47.059 18:39:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:47.059 18:39:53 -- host/multipath.sh@67 -- # active_port=4421 00:23:47.059 18:39:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.059 Attaching 4 probes... 
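[editor's note] Each set_ANA_state call flips both listeners to a new combination and the following confirm_io_on_port verifies where the traffic lands: optimized wins over non_optimized, non_optimized still carries I/O when the alternative is inaccessible, and with both paths inaccessible no @path counters appear at all (the empty probe output a little further down). The combinations driven in this run, as a sketch (the helper name set_ana is assumed purely for brevity):

  set_ana() {  # $1 -> listener 4420, $2 -> listener 4421
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ana non_optimized optimized      # I/O expected on 4421
  set_ana non_optimized inaccessible   # I/O expected on 4420
  set_ana inaccessible  optimized      # I/O expected on 4421
  set_ana inaccessible  inaccessible   # no usable path, no @path counters
  set_ana non_optimized optimized      # back to 4421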
00:23:47.059 @path[10.0.0.2, 4421]: 14701 00:23:47.059 @path[10.0.0.2, 4421]: 20558 00:23:47.059 @path[10.0.0.2, 4421]: 21542 00:23:47.059 @path[10.0.0.2, 4421]: 21444 00:23:47.059 @path[10.0.0.2, 4421]: 21402 00:23:47.059 18:39:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:47.060 18:39:53 -- host/multipath.sh@69 -- # sed -n 1p 00:23:47.060 18:39:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:47.060 18:39:53 -- host/multipath.sh@69 -- # port=4421 00:23:47.060 18:39:53 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.060 18:39:53 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.060 18:39:53 -- host/multipath.sh@72 -- # kill 98907 00:23:47.060 18:39:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.060 18:39:53 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:47.060 18:39:53 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:47.060 18:39:54 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:47.060 18:39:54 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:47.060 18:39:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:47.060 18:39:54 -- host/multipath.sh@65 -- # dtrace_pid=99039 00:23:47.060 18:39:54 -- host/multipath.sh@66 -- # sleep 6 00:23:53.610 18:40:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:53.610 18:40:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:53.610 18:40:00 -- host/multipath.sh@67 -- # active_port= 00:23:53.610 18:40:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.610 Attaching 4 probes... 
00:23:53.610 00:23:53.610 00:23:53.610 00:23:53.610 00:23:53.610 00:23:53.610 18:40:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:53.610 18:40:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:53.610 18:40:00 -- host/multipath.sh@69 -- # sed -n 1p 00:23:53.610 18:40:00 -- host/multipath.sh@69 -- # port= 00:23:53.610 18:40:00 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:53.610 18:40:00 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:53.610 18:40:00 -- host/multipath.sh@72 -- # kill 99039 00:23:53.610 18:40:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.610 18:40:00 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:53.610 18:40:00 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:53.610 18:40:00 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:53.867 18:40:01 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:53.867 18:40:01 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:53.867 18:40:01 -- host/multipath.sh@65 -- # dtrace_pid=99169 00:23:53.867 18:40:01 -- host/multipath.sh@66 -- # sleep 6 00:24:00.418 18:40:07 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:00.418 18:40:07 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:00.418 18:40:07 -- host/multipath.sh@67 -- # active_port=4421 00:24:00.418 18:40:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.418 Attaching 4 probes... 
00:24:00.418 @path[10.0.0.2, 4421]: 19011 00:24:00.418 @path[10.0.0.2, 4421]: 18971 00:24:00.418 @path[10.0.0.2, 4421]: 18928 00:24:00.418 @path[10.0.0.2, 4421]: 18714 00:24:00.418 @path[10.0.0.2, 4421]: 18534 00:24:00.418 18:40:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:00.418 18:40:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:00.418 18:40:07 -- host/multipath.sh@69 -- # sed -n 1p 00:24:00.418 18:40:07 -- host/multipath.sh@69 -- # port=4421 00:24:00.418 18:40:07 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.418 18:40:07 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.418 18:40:07 -- host/multipath.sh@72 -- # kill 99169 00:24:00.418 18:40:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.418 18:40:07 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:00.418 [2024-07-14 18:40:07.738937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.418 [2024-07-14 18:40:07.739210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739371] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 
00:24:00.419 [2024-07-14 18:40:07.739581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 [2024-07-14 18:40:07.739652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901d90 is same with the state(5) to be set 00:24:00.419 18:40:07 -- host/multipath.sh@101 -- # sleep 1 00:24:01.350 18:40:08 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:01.350 18:40:08 -- host/multipath.sh@65 -- # dtrace_pid=99305 00:24:01.350 18:40:08 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.350 18:40:08 -- host/multipath.sh@66 -- # sleep 6 00:24:07.906 18:40:14 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:07.906 18:40:14 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:07.906 18:40:15 -- host/multipath.sh@67 -- # active_port=4420 00:24:07.906 18:40:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.906 Attaching 4 probes... 
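[editor's note] The nvmf_subsystem_remove_listener call above (accompanied by the burst of tcp.c recv-state messages logged while the 4421 connection is torn down) is the failover half of the test: with 4421 gone, I/O must move to 4420, which the confirm_io_on_port non_optimized 4420 that follows verifies. The listener is then re-added and marked optimized to check failback to 4421. In outline:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # failover -> 4420
  # ... confirm_io_on_port non_optimized 4420 ...
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # ... confirm_io_on_port optimized 4421 ...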
00:24:07.906 @path[10.0.0.2, 4420]: 19388 00:24:07.906 @path[10.0.0.2, 4420]: 18697 00:24:07.906 @path[10.0.0.2, 4420]: 18460 00:24:07.906 @path[10.0.0.2, 4420]: 18751 00:24:07.906 @path[10.0.0.2, 4420]: 18762 00:24:07.906 18:40:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.906 18:40:15 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.906 18:40:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.906 18:40:15 -- host/multipath.sh@69 -- # port=4420 00:24:07.906 18:40:15 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:07.906 18:40:15 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:07.906 18:40:15 -- host/multipath.sh@72 -- # kill 99305 00:24:07.906 18:40:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.906 18:40:15 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:07.906 [2024-07-14 18:40:15.310100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.164 18:40:15 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:08.422 18:40:15 -- host/multipath.sh@111 -- # sleep 6 00:24:14.978 18:40:21 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:14.978 18:40:21 -- host/multipath.sh@65 -- # dtrace_pid=99496 00:24:14.978 18:40:21 -- host/multipath.sh@66 -- # sleep 6 00:24:14.978 18:40:21 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98449 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:20.237 18:40:27 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:20.237 18:40:27 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:20.495 18:40:27 -- host/multipath.sh@67 -- # active_port=4421 00:24:20.495 18:40:27 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:20.495 Attaching 4 probes... 
00:24:20.495 @path[10.0.0.2, 4421]: 18002 00:24:20.495 @path[10.0.0.2, 4421]: 19521 00:24:20.495 @path[10.0.0.2, 4421]: 20057 00:24:20.495 @path[10.0.0.2, 4421]: 19891 00:24:20.495 @path[10.0.0.2, 4421]: 19987 00:24:20.495 18:40:27 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:20.495 18:40:27 -- host/multipath.sh@69 -- # sed -n 1p 00:24:20.495 18:40:27 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:20.495 18:40:27 -- host/multipath.sh@69 -- # port=4421 00:24:20.495 18:40:27 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:20.495 18:40:27 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:20.495 18:40:27 -- host/multipath.sh@72 -- # kill 99496 00:24:20.495 18:40:27 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:20.495 18:40:27 -- host/multipath.sh@114 -- # killprocess 98553 00:24:20.495 18:40:27 -- common/autotest_common.sh@926 -- # '[' -z 98553 ']' 00:24:20.495 18:40:27 -- common/autotest_common.sh@930 -- # kill -0 98553 00:24:20.495 18:40:27 -- common/autotest_common.sh@931 -- # uname 00:24:20.495 18:40:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:20.495 18:40:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98553 00:24:20.757 18:40:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:20.757 18:40:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:20.757 killing process with pid 98553 00:24:20.757 18:40:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98553' 00:24:20.757 18:40:27 -- common/autotest_common.sh@945 -- # kill 98553 00:24:20.757 18:40:27 -- common/autotest_common.sh@950 -- # wait 98553 00:24:20.757 Connection closed with partial response: 00:24:20.757 00:24:20.757 00:24:20.757 18:40:28 -- host/multipath.sh@116 -- # wait 98553 00:24:20.757 18:40:28 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:20.757 [2024-07-14 18:39:30.736506] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:20.757 [2024-07-14 18:39:30.736615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98553 ] 00:24:20.757 [2024-07-14 18:39:30.868081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.757 [2024-07-14 18:39:30.972422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.757 Running I/O for 90 seconds... 
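[editor's note] The try.txt dump that follows is the bdevperf side of the run. The repeated NOTICE pairs come from the nvme driver's print helpers and appear to report commands that completed with NVMe status ASYMMETRIC ACCESS INACCESSIBLE (SCT 0x3 / SC 0x02), i.e. I/O that was in flight on a path whose ANA state had just been made inaccessible; the multipath bdev is expected to retry such commands on the remaining path rather than fail the verify job. A quick, purely illustrative way to gauge how many completions hit that status during the run:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt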
00:24:20.757 [2024-07-14 18:39:40.920982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.921219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.921325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.921418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.921433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.757 [2024-07-14 18:39:40.922847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.922955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.757 [2024-07-14 18:39:40.922970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.757 [2024-07-14 18:39:40.923007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:20.758 [2024-07-14 18:39:40.923058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.923282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.923428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.923973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.923994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:24:20.758 [2024-07-14 18:39:40.924177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.924235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.924272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.924307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.924344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.924498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.758 [2024-07-14 18:39:40.924538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.758 [2024-07-14 18:39:40.924559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.758 [2024-07-14 18:39:40.924574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.924596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.924611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.924632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.924647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.925348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.925391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.925631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.925744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.925865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.925938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:20.759 [2024-07-14 18:39:40.925974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.925995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.759 [2024-07-14 18:39:40.926913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.926970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.926985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.759 [2024-07-14 18:39:40.927006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.759 [2024-07-14 18:39:40.927021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:40.927042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:40.927057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
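The "(03/02)" pair that spdk_nvme_print_completion prints on every completion above is the NVMe status code type and status code: SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Namespace Access Inaccessible, consistent with I/O landing on a path whose ANA state is inaccessible during this test; the trailing p/m/dnr flags are the phase, more, and do-not-retry bits (dnr:0 means the command may be retried on another path). A minimal standalone sketch of that decode, using only the NVMe completion DW3 status-field bit layout (plain C, no SPDK headers; the value below is an illustrative example, not taken from this run):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Raw 16-bit phase+status word (completion entry DW3 bits 31:16).
     * Chosen to match the trace above: SCT=0x3, SC=0x02, p=0, m=0, dnr=0. */
    uint16_t raw = (0x3u << 9) | (0x02u << 1);

    unsigned p   =  raw        & 0x1;   /* phase tag        */
    unsigned sc  = (raw >> 1)  & 0xff;  /* status code      */
    unsigned sct = (raw >> 9)  & 0x7;   /* status code type */
    unsigned m   = (raw >> 14) & 0x1;   /* more             */
    unsigned dnr = (raw >> 15) & 0x1;   /* do not retry     */

    printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n", sct, sc, p, m, dnr,
           (sct == 0x3 && sc == 0x02) ? "ASYMMETRIC ACCESS INACCESSIBLE (ANA)"
                                      : "other status");
    return 0;
}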
00:24:20.760 [2024-07-14 18:39:40.927086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:40.927102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:40.927131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:40.927147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:40.927168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:40.927189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:40.927211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:40.927227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:40.927248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:40.927264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.442653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.442733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.442778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.442814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.442850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.442902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.442922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.442936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.443479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.443517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.443576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.443624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.443661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.443735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.443767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.443783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.444123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.760 [2024-07-14 18:39:47.444196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.444309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.760 [2024-07-14 18:39:47.444385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 
nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.760 [2024-07-14 18:39:47.444680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.760 [2024-07-14 18:39:47.444695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.444734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.444773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.444812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.444851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.444920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.444958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.444980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.444995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.445031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.445068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:24:20.761 [2024-07-14 18:39:47.445473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.445504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.445866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.445928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.445968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.445993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.446009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.761 [2024-07-14 18:39:47.446054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.761 [2024-07-14 18:39:47.446333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.761 [2024-07-14 18:39:47.446348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.446403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.446459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.446514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.446574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.446617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.446659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.446701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.446742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.446784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.446832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.446874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.446930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.446956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.762 [2024-07-14 18:39:47.447031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.447467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.447801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.447926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.447967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.447993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.448419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:24:20.762 [2024-07-14 18:39:47.448461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.448491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.762 [2024-07-14 18:39:47.448547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.762 [2024-07-14 18:39:47.448585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.762 [2024-07-14 18:39:47.448602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:47.448630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:47.448646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:47.448673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:47.448687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:47.448714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:47.448729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:47.448755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:47.448770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:47.448805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:47.448821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:47.448848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:47.448864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.449803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.449839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.449911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.449940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.450961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.763 [2024-07-14 18:39:54.451379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.451869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.451910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.451926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.452063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.763 [2024-07-14 18:39:54.452087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.452116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.452131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.452156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.452181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.763 [2024-07-14 18:39:54.452208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.763 [2024-07-14 18:39:54.452224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.452304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.452383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.452422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.452516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.452676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.452815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:24:20.764 [2024-07-14 18:39:54.452883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.452978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.452992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.453680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.453722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.453764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.453915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.453955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.453979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.764 [2024-07-14 18:39:54.453995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.764 [2024-07-14 18:39:54.454021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.764 [2024-07-14 18:39:54.454036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.765 [2024-07-14 18:39:54.454115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.765 [2024-07-14 18:39:54.454155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:39:54.454595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 
nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:39:54.454611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.765 [2024-07-14 18:40:07.739701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.765 [2024-07-14 18:40:07.739788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.739826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.765 [2024-07-14 18:40:07.739863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.765 [2024-07-14 18:40:07.739899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.739934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.739974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:24:20.765 [2024-07-14 18:40:07.740458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.765 [2024-07-14 18:40:07.740665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.765 [2024-07-14 18:40:07.740686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.740978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.740992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.741580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.741625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.741653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.741710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.741739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.741977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.741989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.742066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.742118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 
[2024-07-14 18:40:07.742355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.766 [2024-07-14 18:40:07.742447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.742474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.742517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.742545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.766 [2024-07-14 18:40:07.742574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.766 [2024-07-14 18:40:07.742588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.742617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.742747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.742882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.742925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.742951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.742977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.742991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49776 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:20.767 [2024-07-14 18:40:07.743619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.767 [2024-07-14 18:40:07.743769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.767 [2024-07-14 18:40:07.743935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.767 [2024-07-14 18:40:07.743964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.743977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.743991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744251] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.768 [2024-07-14 18:40:07.744452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.744479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744738] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1756790 was disconnected and freed. reset controller. 
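The dump above is the initiator printing every command that was still outstanding on submission queue 1 when the target deleted it; each READ/WRITE is reported with status ABORTED - SQ DELETION (00/08), after which qpair 0x1756790 is freed and a controller reset is scheduled. When reading a saved capture of a run like this, a short shell pipeline can condense such a dump into per-opcode abort counts. This is a reader-side convenience, not something the harness runs, and LOGFILE is a placeholder path for your own capture:

  # Summarize aborted I/O commands per opcode in a saved initiator log.
  # LOGFILE is a hypothetical capture path; point it at your own file.
  LOGFILE=bdevperf.log
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$LOGFILE" \
    | awk '{count[$NF]++} END {for (op in count) print op, count[op]}'

Run against a capture of the block above it would print one line per opcode (READ, WRITE) with the number of aborted commands of that kind.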
00:24:20.768 [2024-07-14 18:40:07.744858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.768 [2024-07-14 18:40:07.744899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.768 [2024-07-14 18:40:07.744942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.768 [2024-07-14 18:40:07.744979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.744993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.768 [2024-07-14 18:40:07.745005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.745019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.768 [2024-07-14 18:40:07.745032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.768 [2024-07-14 18:40:07.745051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1923980 is same with the state(5) to be set 00:24:20.768 [2024-07-14 18:40:07.746425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.768 [2024-07-14 18:40:07.746465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1923980 (9): Bad file descriptor 00:24:20.768 [2024-07-14 18:40:07.746642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.768 [2024-07-14 18:40:07.746701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.768 [2024-07-14 18:40:07.746724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1923980 with addr=10.0.0.2, port=4421 00:24:20.768 [2024-07-14 18:40:07.746739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1923980 is same with the state(5) to be set 00:24:20.768 [2024-07-14 18:40:07.746772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1923980 (9): Bad file descriptor 00:24:20.768 [2024-07-14 18:40:07.746797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:20.768 [2024-07-14 18:40:07.746811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:20.768 [2024-07-14 18:40:07.746825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.768 [2024-07-14 18:40:07.746849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
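The reconnect attempt above fails immediately: connect() to 10.0.0.2:4421 returns errno 111 (ECONNREFUSED) because nothing is listening on that port at that moment, the stale socket shows up as a bad file descriptor (errno 9), and the controller is marked failed until the next retry; the retry roughly ten seconds later (next lines) succeeds once the listener is back. A quick way to check from the host whether the port has come back, using only bash built-ins; this probe is a diagnostic suggestion, not part of the harness:

  # Probe the address/port the initiator keeps retrying (values from this run).
  # Uses bash's /dev/tcp pseudo-device, so no extra tools are required.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4421' 2>/dev/null; then
    echo "listener is reachable on 10.0.0.2:4421"
  else
    echo "connection still refused or timing out (errno 111 in the SPDK log)"
  fi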
00:24:20.768 [2024-07-14 18:40:07.746862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.768 [2024-07-14 18:40:17.808065] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:20.768 Received shutdown signal, test time was about 55.171990 seconds 00:24:20.768 00:24:20.768 Latency(us) 00:24:20.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.768 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.768 Verification LBA range: start 0x0 length 0x4000 00:24:20.768 Nvme0n1 : 55.17 11018.94 43.04 0.00 0.00 11599.28 303.48 7046430.72 00:24:20.768 =================================================================================================================== 00:24:20.768 Total : 11018.94 43.04 0.00 0.00 11599.28 303.48 7046430.72 00:24:20.768 18:40:28 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.026 18:40:28 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:21.026 18:40:28 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:21.026 18:40:28 -- host/multipath.sh@125 -- # nvmftestfini 00:24:21.026 18:40:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:21.026 18:40:28 -- nvmf/common.sh@116 -- # sync 00:24:21.026 18:40:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:21.026 18:40:28 -- nvmf/common.sh@119 -- # set +e 00:24:21.026 18:40:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:21.026 18:40:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:21.026 rmmod nvme_tcp 00:24:21.026 rmmod nvme_fabrics 00:24:21.027 rmmod nvme_keyring 00:24:21.027 18:40:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:21.285 18:40:28 -- nvmf/common.sh@123 -- # set -e 00:24:21.285 18:40:28 -- nvmf/common.sh@124 -- # return 0 00:24:21.285 18:40:28 -- nvmf/common.sh@477 -- # '[' -n 98449 ']' 00:24:21.285 18:40:28 -- nvmf/common.sh@478 -- # killprocess 98449 00:24:21.285 18:40:28 -- common/autotest_common.sh@926 -- # '[' -z 98449 ']' 00:24:21.285 18:40:28 -- common/autotest_common.sh@930 -- # kill -0 98449 00:24:21.285 18:40:28 -- common/autotest_common.sh@931 -- # uname 00:24:21.285 18:40:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:21.285 18:40:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98449 00:24:21.285 18:40:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:21.285 18:40:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:21.285 killing process with pid 98449 00:24:21.285 18:40:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98449' 00:24:21.285 18:40:28 -- common/autotest_common.sh@945 -- # kill 98449 00:24:21.285 18:40:28 -- common/autotest_common.sh@950 -- # wait 98449 00:24:21.544 18:40:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:21.544 18:40:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:21.544 18:40:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:21.544 18:40:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.544 18:40:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:21.544 18:40:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.544 18:40:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.544 18:40:28 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:21.544 18:40:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:21.544 00:24:21.544 real 1m0.962s 00:24:21.544 user 2m51.661s 00:24:21.544 sys 0m14.116s 00:24:21.544 18:40:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.544 18:40:28 -- common/autotest_common.sh@10 -- # set +x 00:24:21.544 ************************************ 00:24:21.544 END TEST nvmf_multipath 00:24:21.544 ************************************ 00:24:21.544 18:40:28 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:21.544 18:40:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:21.544 18:40:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:21.544 18:40:28 -- common/autotest_common.sh@10 -- # set +x 00:24:21.544 ************************************ 00:24:21.544 START TEST nvmf_timeout 00:24:21.544 ************************************ 00:24:21.544 18:40:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:21.544 * Looking for test storage... 00:24:21.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:21.544 18:40:28 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.544 18:40:28 -- nvmf/common.sh@7 -- # uname -s 00:24:21.544 18:40:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.544 18:40:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.544 18:40:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.544 18:40:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.544 18:40:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.544 18:40:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.544 18:40:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.544 18:40:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.544 18:40:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.544 18:40:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.544 18:40:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:24:21.544 18:40:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:24:21.544 18:40:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.544 18:40:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.544 18:40:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.544 18:40:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.544 18:40:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.544 18:40:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.544 18:40:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.544 18:40:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.544 18:40:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.545 18:40:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.545 18:40:28 -- paths/export.sh@5 -- # export PATH 00:24:21.545 18:40:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.545 18:40:28 -- nvmf/common.sh@46 -- # : 0 00:24:21.545 18:40:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:21.545 18:40:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:21.545 18:40:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:21.545 18:40:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.545 18:40:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.545 18:40:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:21.545 18:40:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:21.545 18:40:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:21.545 18:40:28 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.545 18:40:28 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.545 18:40:28 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.545 18:40:28 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:21.545 18:40:28 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.545 18:40:28 -- host/timeout.sh@19 -- # nvmftestinit 00:24:21.545 18:40:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:21.545 18:40:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.545 18:40:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:21.545 18:40:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:21.545 18:40:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:21.545 18:40:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.545 18:40:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.545 18:40:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.545 18:40:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
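timeout.sh starts by sourcing nvmf/common.sh and fixing a few knobs for the whole test: a 64 MB malloc bdev with 512-byte blocks as the backing namespace, the in-repo rpc.py as the RPC client, and a separate RPC socket for the bdevperf process so the target and the initiator can be driven independently. Restated from the traced assignments above, with no values changed:

  # Settings the timeout test runs with (copied from the trace).
  MALLOC_BDEV_SIZE=64                                   # size of Malloc0, in MB, as passed to bdev_malloc_create
  MALLOC_BLOCK_SIZE=512                                 # bytes per block
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # RPC client used throughout
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock              # initiator-side RPC socket, separate from the target's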
00:24:21.545 18:40:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:21.545 18:40:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:21.545 18:40:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:21.545 18:40:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:21.545 18:40:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:21.545 18:40:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.545 18:40:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.545 18:40:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:21.545 18:40:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:21.545 18:40:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:21.545 18:40:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:21.545 18:40:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:21.545 18:40:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.545 18:40:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:21.545 18:40:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:21.545 18:40:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:21.545 18:40:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:21.545 18:40:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:21.545 18:40:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:21.545 Cannot find device "nvmf_tgt_br" 00:24:21.545 18:40:28 -- nvmf/common.sh@154 -- # true 00:24:21.545 18:40:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.545 Cannot find device "nvmf_tgt_br2" 00:24:21.545 18:40:28 -- nvmf/common.sh@155 -- # true 00:24:21.545 18:40:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:21.545 18:40:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:21.545 Cannot find device "nvmf_tgt_br" 00:24:21.545 18:40:28 -- nvmf/common.sh@157 -- # true 00:24:21.545 18:40:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:21.803 Cannot find device "nvmf_tgt_br2" 00:24:21.803 18:40:28 -- nvmf/common.sh@158 -- # true 00:24:21.803 18:40:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:21.803 18:40:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:21.803 18:40:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.803 18:40:29 -- nvmf/common.sh@161 -- # true 00:24:21.803 18:40:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.803 18:40:29 -- nvmf/common.sh@162 -- # true 00:24:21.803 18:40:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:21.803 18:40:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:21.803 18:40:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:21.803 18:40:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:21.803 18:40:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:21.803 18:40:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:21.803 18:40:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:24:21.803 18:40:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:21.803 18:40:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:21.803 18:40:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:21.803 18:40:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:21.803 18:40:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:21.803 18:40:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:21.803 18:40:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:21.803 18:40:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:21.803 18:40:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:21.803 18:40:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:21.803 18:40:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:21.803 18:40:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:21.803 18:40:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:21.803 18:40:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:21.803 18:40:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:21.803 18:40:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:21.803 18:40:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:21.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:24:21.803 00:24:21.803 --- 10.0.0.2 ping statistics --- 00:24:21.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.804 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:21.804 18:40:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:22.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:22.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:22.063 00:24:22.063 --- 10.0.0.3 ping statistics --- 00:24:22.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.063 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:22.063 18:40:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:22.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:22.063 00:24:22.063 --- 10.0.0.1 ping statistics --- 00:24:22.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.063 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:22.063 18:40:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.063 18:40:29 -- nvmf/common.sh@421 -- # return 0 00:24:22.063 18:40:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:22.063 18:40:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.063 18:40:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:22.063 18:40:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:22.063 18:40:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.063 18:40:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:22.063 18:40:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:22.063 18:40:29 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:22.063 18:40:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:22.063 18:40:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:22.063 18:40:29 -- common/autotest_common.sh@10 -- # set +x 00:24:22.063 18:40:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:22.063 18:40:29 -- nvmf/common.sh@469 -- # nvmfpid=99816 00:24:22.063 18:40:29 -- nvmf/common.sh@470 -- # waitforlisten 99816 00:24:22.063 18:40:29 -- common/autotest_common.sh@819 -- # '[' -z 99816 ']' 00:24:22.063 18:40:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.063 18:40:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:22.063 18:40:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.063 18:40:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:22.063 18:40:29 -- common/autotest_common.sh@10 -- # set +x 00:24:22.063 [2024-07-14 18:40:29.307331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:22.063 [2024-07-14 18:40:29.307406] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.063 [2024-07-14 18:40:29.440405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:22.322 [2024-07-14 18:40:29.520628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:22.322 [2024-07-14 18:40:29.520787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.322 [2024-07-14 18:40:29.520799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.322 [2024-07-14 18:40:29.520807] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
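At this point nvmftestinit has built the veth topology the two ends will use: nvmf_init_if (10.0.0.1) stays in the host namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) sit inside the nvmf_tgt_ns_spdk namespace, and the bridge-side peers all hang off nvmf_br; the three pings above confirm each address answers. A condensed sketch of the equivalent manual setup, using the same names and addresses as the trace (the second target interface, loopback, and the iptables ACCEPT rules are omitted for brevity):

  # Recreate the host <-> namespace path used by the test (abridged).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2   # should now answer from inside the namespace

The harness additionally opens the NVMe/TCP port and bridge forwarding with iptables, as shown in the trace, before loading nvme-tcp and starting the target inside the namespace.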
00:24:22.322 [2024-07-14 18:40:29.520929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.322 [2024-07-14 18:40:29.520937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.887 18:40:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:22.887 18:40:30 -- common/autotest_common.sh@852 -- # return 0 00:24:22.887 18:40:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:22.887 18:40:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:22.887 18:40:30 -- common/autotest_common.sh@10 -- # set +x 00:24:23.145 18:40:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.145 18:40:30 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.145 18:40:30 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.145 [2024-07-14 18:40:30.535408] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.145 18:40:30 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:23.721 Malloc0 00:24:23.721 18:40:30 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.721 18:40:31 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.979 18:40:31 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.236 [2024-07-14 18:40:31.590701] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.236 18:40:31 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:24.236 18:40:31 -- host/timeout.sh@32 -- # bdevperf_pid=99907 00:24:24.236 18:40:31 -- host/timeout.sh@34 -- # waitforlisten 99907 /var/tmp/bdevperf.sock 00:24:24.236 18:40:31 -- common/autotest_common.sh@819 -- # '[' -z 99907 ']' 00:24:24.236 18:40:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.236 18:40:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:24.236 18:40:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.236 18:40:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:24.236 18:40:31 -- common/autotest_common.sh@10 -- # set +x 00:24:24.236 [2024-07-14 18:40:31.653255] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:24.236 [2024-07-14 18:40:31.653326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99907 ] 00:24:24.495 [2024-07-14 18:40:31.791799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.495 [2024-07-14 18:40:31.873470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.428 18:40:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:25.428 18:40:32 -- common/autotest_common.sh@852 -- # return 0 00:24:25.428 18:40:32 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:25.686 18:40:32 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:25.944 NVMe0n1 00:24:25.944 18:40:33 -- host/timeout.sh@51 -- # rpc_pid=99955 00:24:25.944 18:40:33 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.944 18:40:33 -- host/timeout.sh@53 -- # sleep 1 00:24:25.944 Running I/O for 10 seconds... 00:24:26.873 18:40:34 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.134 [2024-07-14 18:40:34.463370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 
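The timeout test then stands the target up over that path and deliberately yanks it: it creates the TCP transport, exposes the 64 MB Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, attaches the controller from bdevperf with a 5 s controller-loss budget and a 2 s reconnect delay, starts the 10-second verify workload, and about one second in removes the 4420 listener, which is what produces the recv-state errors that follow. A condensed restatement of the RPC sequence, with every flag copied from the trace above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side: transport, backing bdev, subsystem, namespace, listener.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side, against bdevperf's own RPC socket: apply the bdev_nvme
  # option the harness sets (-r -1, as traced), then attach with a 5 s
  # controller-loss timeout and 2 s between reconnect attempts.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Fault injection: drop the listener while the verify workload is running.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420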
[2024-07-14 18:40:34.463665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.134 [2024-07-14 18:40:34.463718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.463959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aeb30 is same with the state(5) to be set 00:24:27.135 [2024-07-14 18:40:34.464345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464735] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 
nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.464986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.464995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.465003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.465012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.465020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.465030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.465037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.135 [2024-07-14 18:40:34.465047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.135 [2024-07-14 18:40:34.465055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117024 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.136 [2024-07-14 18:40:34.465350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 
18:40:34.465586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.136 [2024-07-14 18:40:34.465918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.136 [2024-07-14 18:40:34.465927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.136 [2024-07-14 18:40:34.465935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.465944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.465952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.465962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.465969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.465979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.465987] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.465996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.137 [2024-07-14 18:40:34.466579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 
[2024-07-14 18:40:34.466589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.137 [2024-07-14 18:40:34.466768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.137 [2024-07-14 18:40:34.466777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466786] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.138 [2024-07-14 18:40:34.466967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.466976] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc68a0 is same with the state(5) to be set 00:24:27.138 [2024-07-14 18:40:34.466986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:27.138 [2024-07-14 18:40:34.466993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:27.138 [2024-07-14 18:40:34.467000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117000 len:8 PRP1 0x0 PRP2 0x0 00:24:27.138 [2024-07-14 18:40:34.467007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.138 [2024-07-14 18:40:34.467058] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bc68a0 was disconnected and freed. reset controller. 00:24:27.138 [2024-07-14 18:40:34.467286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.138 [2024-07-14 18:40:34.467360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba85e0 (9): Bad file descriptor 00:24:27.138 [2024-07-14 18:40:34.467456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.138 [2024-07-14 18:40:34.467521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.138 [2024-07-14 18:40:34.467537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba85e0 with addr=10.0.0.2, port=4420 00:24:27.138 [2024-07-14 18:40:34.467546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba85e0 is same with the state(5) to be set 00:24:27.138 [2024-07-14 18:40:34.467624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba85e0 (9): Bad file descriptor 00:24:27.138 [2024-07-14 18:40:34.467640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.138 [2024-07-14 18:40:34.467649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.138 [2024-07-14 18:40:34.467659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.138 [2024-07-14 18:40:34.467678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
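At this point bdev_nvme is cycling through reconnect attempts: each posix_sock_create connect() to 10.0.0.2:4420 is refused (errno = 111), most likely because the target listener was taken down earlier in the run (not shown in this excerpt), so controller reinitialization keeps failing and the reset is retried. The host/timeout.sh steps that follow poll the bdevperf RPC socket for the controller and bdev names while this is happening. A rough, self-contained reconstruction of those two helpers (paths, socket, and jq filters as traced in the following lines; the exact function bodies in host/timeout.sh are assumed):

#!/usr/bin/env bash
# Sketch of the get_controller / get_bdev helpers traced below.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock

get_controller() {
    # Lists controllers known to the bdevperf app; prints "NVMe0" while the controller
    # is attached and nothing once the ctrlr-loss timeout has expired and it is deleted.
    "$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # Same idea for the exposed namespace bdev ("NVMe0n1" while attached).
    "$rpc_py" -s "$bdevperf_sock" bdev_get_bdevs | jq -r '.[].name'
}

get_controller
get_bdev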
00:24:27.138 [2024-07-14 18:40:34.467688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.138 18:40:34 -- host/timeout.sh@56 -- # sleep 2 00:24:29.667 [2024-07-14 18:40:36.467961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.667 [2024-07-14 18:40:36.468069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.667 [2024-07-14 18:40:36.468088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba85e0 with addr=10.0.0.2, port=4420 00:24:29.667 [2024-07-14 18:40:36.468101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba85e0 is same with the state(5) to be set 00:24:29.667 [2024-07-14 18:40:36.468123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba85e0 (9): Bad file descriptor 00:24:29.667 [2024-07-14 18:40:36.468167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.667 [2024-07-14 18:40:36.468178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.667 [2024-07-14 18:40:36.468189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.667 [2024-07-14 18:40:36.468216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.667 [2024-07-14 18:40:36.468227] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.667 18:40:36 -- host/timeout.sh@57 -- # get_controller 00:24:29.667 18:40:36 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.667 18:40:36 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:29.667 18:40:36 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:29.667 18:40:36 -- host/timeout.sh@58 -- # get_bdev 00:24:29.667 18:40:36 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:29.667 18:40:36 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:29.667 18:40:37 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:29.667 18:40:37 -- host/timeout.sh@61 -- # sleep 5 00:24:31.573 [2024-07-14 18:40:38.468595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.573 [2024-07-14 18:40:38.468732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.573 [2024-07-14 18:40:38.468751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba85e0 with addr=10.0.0.2, port=4420 00:24:31.573 [2024-07-14 18:40:38.468763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba85e0 is same with the state(5) to be set 00:24:31.573 [2024-07-14 18:40:38.468789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba85e0 (9): Bad file descriptor 00:24:31.573 [2024-07-14 18:40:38.468807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.573 [2024-07-14 18:40:38.468816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.573 [2024-07-14 18:40:38.468826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:24:31.573 [2024-07-14 18:40:38.468887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.573 [2024-07-14 18:40:38.468900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.551 [2024-07-14 18:40:40.468930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.551 [2024-07-14 18:40:40.468986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.551 [2024-07-14 18:40:40.469007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.551 [2024-07-14 18:40:40.469016] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:33.551 [2024-07-14 18:40:40.469051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.116 00:24:34.116 Latency(us) 00:24:34.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.116 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.116 Verification LBA range: start 0x0 length 0x4000 00:24:34.116 NVMe0n1 : 8.13 1789.74 6.99 15.74 0.00 70782.86 2591.65 7015926.69 00:24:34.116 =================================================================================================================== 00:24:34.116 Total : 1789.74 6.99 15.74 0.00 70782.86 2591.65 7015926.69 00:24:34.116 0 00:24:34.683 18:40:42 -- host/timeout.sh@62 -- # get_controller 00:24:34.683 18:40:42 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.683 18:40:42 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:34.941 18:40:42 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:34.941 18:40:42 -- host/timeout.sh@63 -- # get_bdev 00:24:34.941 18:40:42 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:34.941 18:40:42 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:35.200 18:40:42 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:35.200 18:40:42 -- host/timeout.sh@65 -- # wait 99955 00:24:35.200 18:40:42 -- host/timeout.sh@67 -- # killprocess 99907 00:24:35.200 18:40:42 -- common/autotest_common.sh@926 -- # '[' -z 99907 ']' 00:24:35.200 18:40:42 -- common/autotest_common.sh@930 -- # kill -0 99907 00:24:35.200 18:40:42 -- common/autotest_common.sh@931 -- # uname 00:24:35.200 18:40:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:35.200 18:40:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99907 00:24:35.200 18:40:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:35.200 killing process with pid 99907 00:24:35.200 18:40:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:35.200 18:40:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99907' 00:24:35.201 Received shutdown signal, test time was about 9.190482 seconds 00:24:35.201 00:24:35.201 Latency(us) 00:24:35.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.201 =================================================================================================================== 00:24:35.201 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.201 18:40:42 -- common/autotest_common.sh@945 -- # kill 99907 00:24:35.201 18:40:42 -- 
common/autotest_common.sh@950 -- # wait 99907 00:24:35.460 18:40:42 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.731 [2024-07-14 18:40:42.974212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.731 18:40:42 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:35.731 18:40:42 -- host/timeout.sh@74 -- # bdevperf_pid=100107 00:24:35.731 18:40:42 -- host/timeout.sh@76 -- # waitforlisten 100107 /var/tmp/bdevperf.sock 00:24:35.731 18:40:42 -- common/autotest_common.sh@819 -- # '[' -z 100107 ']' 00:24:35.731 18:40:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.731 18:40:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:35.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.731 18:40:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.731 18:40:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:35.731 18:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:35.731 [2024-07-14 18:40:43.045875] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:35.731 [2024-07-14 18:40:43.045968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100107 ] 00:24:35.990 [2024-07-14 18:40:43.187224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.990 [2024-07-14 18:40:43.265976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.922 18:40:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:36.922 18:40:43 -- common/autotest_common.sh@852 -- # return 0 00:24:36.922 18:40:43 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:36.922 18:40:44 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:37.181 NVMe0n1 00:24:37.181 18:40:44 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.181 18:40:44 -- host/timeout.sh@84 -- # rpc_pid=100155 00:24:37.181 18:40:44 -- host/timeout.sh@86 -- # sleep 1 00:24:37.181 Running I/O for 10 seconds... 
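The setup traced above reduces to a short sequence: re-add the TCP listener on the target, start bdevperf in idle mode on its RPC socket, set the bdev_nvme retry option, attach the controller with the reconnect/timeout knobs under test, and then trigger the workload over RPC. A condensed, standalone sketch of that sequence (same binaries, addresses, and flags as in the trace; the socket-poll loop merely stands in for the autotest waitforlisten helper):

#!/usr/bin/env bash
# Condensed from the host/timeout.sh steps traced above; helper logic is approximated.
spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
sock=/var/tmp/bdevperf.sock

# Make the subsystem reachable again on 10.0.0.2:4420.
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start bdevperf idle (-z) on core mask 0x4: queue depth 128, 4096-byte I/O, verify workload, 10 s.
"$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# Stand-in for waitforlisten: wait until the bdevperf RPC socket exists.
while [ ! -S "$sock" ]; do sleep 0.1; done

# Apply the bdev_nvme option used by the test (-r -1, presumably an unlimited retry count),
# then attach the controller with the timeout knobs this test exercises.
"$rpc" -s "$sock" bdev_nvme_set_options -r -1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the 10-second verify run through the bdevperf RPC helper.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
rpc_pid=$!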
00:24:38.160 18:40:45 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.420 [2024-07-14 18:40:45.695093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547a0 is same with the state(5) to be set 00:24:38.420 [2024-07-14 18:40:45.695707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695872] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.420 [2024-07-14 18:40:45.695966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.420 [2024-07-14 18:40:45.695974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.695984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:38.421 [2024-07-14 18:40:45.696699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.421 [2024-07-14 18:40:45.696787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.421 [2024-07-14 18:40:45.696797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.421 [2024-07-14 18:40:45.696805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.696824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.696841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.696860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.696892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 
18:40:45.696911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.696919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.696936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.696953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.696986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.696995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.422 [2024-07-14 18:40:45.697543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.422 [2024-07-14 18:40:45.697637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.422 [2024-07-14 18:40:45.697646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120816 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.697782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.697800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.697863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:38.423 [2024-07-14 18:40:45.697881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.697915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.697989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.697997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.698014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.698070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 
18:40:45.698104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.698121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.423 [2024-07-14 18:40:45.698138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.423 [2024-07-14 18:40:45.698269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.423 [2024-07-14 18:40:45.698278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1865780 is same with the state(5) to be set 00:24:38.423 [2024-07-14 18:40:45.698289] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:38.423 [2024-07-14 18:40:45.698295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:24:38.423 [2024-07-14 18:40:45.698303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121008 len:8 PRP1 0x0 PRP2 0x0 
00:24:38.423 [2024-07-14 18:40:45.698311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:38.423 [2024-07-14 18:40:45.698361] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1865780 was disconnected and freed. reset controller. 
00:24:38.423 [2024-07-14 18:40:45.698633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:38.423 [2024-07-14 18:40:45.698703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 
00:24:38.423 [2024-07-14 18:40:45.698803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:38.423 [2024-07-14 18:40:45.698850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:38.423 [2024-07-14 18:40:45.698881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18475e0 with addr=10.0.0.2, port=4420 
00:24:38.423 [2024-07-14 18:40:45.698891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18475e0 is same with the state(5) to be set 
00:24:38.423 [2024-07-14 18:40:45.698938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 
00:24:38.423 [2024-07-14 18:40:45.698983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:38.423 [2024-07-14 18:40:45.698994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:38.423 [2024-07-14 18:40:45.699003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:38.423 [2024-07-14 18:40:45.699022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
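The flood of "ABORTED - SQ DELETION" completions above is the expected result of pulling the listener: when the qpair is torn down, every queued READ/WRITE is completed manually with Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), which spdk_nvme_print_completion renders as "(00/08)". A minimal sketch of decoding that pair is below; the decode_nvme_status helper is hypothetical and not part of SPDK or the test scripts.

  # Hypothetical helper: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
  decode_nvme_status() {
      # $1 = status code type (hex), $2 = status code (hex)
      local sct=$((16#$1)) sc=$((16#$2))
      if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
          echo "Generic Command Status: Command Aborted due to SQ Deletion"
      else
          echo "sct=$sct sc=$sc (see the NVMe base spec status code tables)"
      fi
  }
  decode_nvme_status 00 08   # prints: Generic Command Status: Command Aborted due to SQ Deletion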
00:24:38.423 [2024-07-14 18:40:45.699033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
18:40:45 -- host/timeout.sh@90 -- # sleep 1 
00:24:39.359 [2024-07-14 18:40:46.699166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:39.359 [2024-07-14 18:40:46.699256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:39.359 [2024-07-14 18:40:46.699273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18475e0 with addr=10.0.0.2, port=4420 
00:24:39.359 [2024-07-14 18:40:46.699285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18475e0 is same with the state(5) to be set 
00:24:39.359 [2024-07-14 18:40:46.699308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 
00:24:39.359 [2024-07-14 18:40:46.699335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:39.359 [2024-07-14 18:40:46.699346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:39.359 [2024-07-14 18:40:46.699356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:39.359 [2024-07-14 18:40:46.699381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.359 [2024-07-14 18:40:46.699392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:39.359 18:40:46 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:24:39.617 [2024-07-14 18:40:46.953713] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:39.617 18:40:46 -- host/timeout.sh@92 -- # wait 100155 
00:24:40.551 [2024-07-14 18:40:47.716601] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:48.668 
00:24:48.668 Latency(us) 
00:24:48.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:48.668 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:24:48.668 Verification LBA range: start 0x0 length 0x4000 
00:24:48.668 NVMe0n1 : 10.01 9890.93 38.64 0.00 0.00 12921.35 1392.64 3019898.88 
00:24:48.668 =================================================================================================================== 
00:24:48.668 Total : 9890.93 38.64 0.00 0.00 12921.35 1392.64 3019898.88 
00:24:48.668 0 
00:24:48.668 18:40:54 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:24:48.668 18:40:54 -- host/timeout.sh@97 -- # rpc_pid=100272 
00:24:48.668 18:40:54 -- host/timeout.sh@98 -- # sleep 1 
00:24:48.668 Running I/O for 10 seconds... 
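This part of host/timeout.sh exercises the reconnect path end to end: the TCP listener is removed so the initiator's reconnect attempts fail with errno 111 (ECONNREFUSED), the listener is re-added, the controller reset then succeeds, and bdevperf reports the latency summary for the run before the next 10-second pass starts. A condensed sketch of the RPC sequence, using only the commands already shown in this log (paths, NQN, address, and the bdevperf socket are taken from the trace above; this is not the verbatim test script), would look roughly like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf="/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock"

  # Drop the listener: queued I/O on the qpair is aborted and the host starts reconnecting.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Let the reconnect attempts fail (connect() -> ECONNREFUSED) while the listener is down.
  sleep 1
  # Bring the listener back; the pending controller reset completes successfully.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Ask the running bdevperf instance for the run's result summary.
  $bperf perform_tests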
00:24:48.668 18:40:55 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:48.668 [2024-07-14 18:40:55.858401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12af7b0 is same with the state(5) to be set
(... the same tcp.c:1574 nvmf_tcp_qpair_set_recv_state error for tqpair=0x12af7b0 is repeated many more times with successive timestamps ...)
00:24:48.668 [2024-07-14 18:40:55.859497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.668 [2024-07-14 18:40:55.859549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.669 [2024-07-14 18:40:55.859608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.669 [2024-07-14 18:40:55.859620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.669 [2024-07-14 18:40:55.859632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.669 [2024-07-14 18:40:55.859642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.669 [2024-07-14 18:40:55.859654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.669 [2024-07-14 18:40:55.859663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.669 [2024-07-14 18:40:55.859674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.669 [2024-07-14 18:40:55.859684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.669 [2024-07-14 18:40:55.859695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.669
[2024-07-14 18:40:55.859704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.669 [2024-07-14 18:40:55.859804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859917] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.859990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.859998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.669 [2024-07-14 18:40:55.860203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.669 [2024-07-14 18:40:55.860261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.669 [2024-07-14 18:40:55.860272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.669 [2024-07-14 18:40:55.860281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860319] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:48.670 [2024-07-14 18:40:55.860725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.670 [2024-07-14 18:40:55.860906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 
18:40:55.860917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.670 [2024-07-14 18:40:55.860926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.670 [2024-07-14 18:40:55.860937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.860946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.860956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.860964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.860974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.860983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.860994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861303] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.671 [2024-07-14 18:40:55.861639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.671 [2024-07-14 18:40:55.861650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.671 [2024-07-14 18:40:55.861659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.861705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.861764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.861783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.861803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.861827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.861922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 
[2024-07-14 18:40:55.861960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.861989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.672 [2024-07-14 18:40:55.861998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-14 18:40:55.862160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1875e30 is same with the state(5) to be set 00:24:48.672 [2024-07-14 18:40:55.862188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.672 [2024-07-14 18:40:55.862195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.672 [2024-07-14 18:40:55.862204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125088 len:8 PRP1 0x0 PRP2 0x0 00:24:48.672 [2024-07-14 18:40:55.862213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.672 [2024-07-14 18:40:55.862266] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1875e30 was disconnected and freed. reset controller. 00:24:48.672 [2024-07-14 18:40:55.862512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.672 [2024-07-14 18:40:55.862617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 00:24:48.672 [2024-07-14 18:40:55.862722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.672 [2024-07-14 18:40:55.862771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.672 [2024-07-14 18:40:55.862786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18475e0 with addr=10.0.0.2, port=4420 00:24:48.672 [2024-07-14 18:40:55.862797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18475e0 is same with the state(5) to be set 00:24:48.672 [2024-07-14 18:40:55.862821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 00:24:48.672 [2024-07-14 18:40:55.862836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:48.672 [2024-07-14 18:40:55.862845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:48.672 [2024-07-14 18:40:55.862855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:48.672 [2024-07-14 18:40:55.862875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:48.672 [2024-07-14 18:40:55.862886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.672 18:40:55 -- host/timeout.sh@101 -- # sleep 3 00:24:49.609 [2024-07-14 18:40:56.862994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.609 [2024-07-14 18:40:56.863105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.609 [2024-07-14 18:40:56.863124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18475e0 with addr=10.0.0.2, port=4420 00:24:49.609 [2024-07-14 18:40:56.863136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18475e0 is same with the state(5) to be set 00:24:49.609 [2024-07-14 18:40:56.863159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 00:24:49.609 [2024-07-14 18:40:56.863177] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.609 [2024-07-14 18:40:56.863185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.609 [2024-07-14 18:40:56.863195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.609 [2024-07-14 18:40:56.863222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:49.609 [2024-07-14 18:40:56.863233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.545 [2024-07-14 18:40:57.863360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-07-14 18:40:57.863463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.545 [2024-07-14 18:40:57.863480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18475e0 with addr=10.0.0.2, port=4420 00:24:50.545 [2024-07-14 18:40:57.863493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18475e0 is same with the state(5) to be set 00:24:50.545 [2024-07-14 18:40:57.863528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 00:24:50.545 [2024-07-14 18:40:57.863548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.545 [2024-07-14 18:40:57.863566] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.545 [2024-07-14 18:40:57.863605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.545 [2024-07-14 18:40:57.863634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.545 [2024-07-14 18:40:57.863646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.480 [2024-07-14 18:40:58.865889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.480 [2024-07-14 18:40:58.865998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.481 [2024-07-14 18:40:58.866016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18475e0 with addr=10.0.0.2, port=4420 00:24:51.481 [2024-07-14 18:40:58.866028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18475e0 is same with the state(5) to be set 00:24:51.481 [2024-07-14 18:40:58.866217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18475e0 (9): Bad file descriptor 00:24:51.481 [2024-07-14 18:40:58.866410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.481 [2024-07-14 18:40:58.866423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.481 [2024-07-14 18:40:58.866433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.481 [2024-07-14 18:40:58.868858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.481 [2024-07-14 18:40:58.868902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.481 18:40:58 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.738 [2024-07-14 18:40:59.137218] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.738 18:40:59 -- host/timeout.sh@103 -- # wait 100272 00:24:52.690 [2024-07-14 18:40:59.896932] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
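For reference, the abort/reconnect/recovery cycle recorded above is driven purely by toggling the target's TCP listener; a minimal sketch of the two RPC calls involved, assuming the setup used in this run (subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, SPDK checkout at /home/vagrant/spdk_repo/spdk):

  # drop the listener: queued I/O is aborted (SQ DELETION) and host reconnects fail with errno 111
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # restore the listener: the host reconnects and the controller reset completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420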
00:24:57.959
00:24:57.959 Latency(us)
00:24:57.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.959 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:57.959 Verification LBA range: start 0x0 length 0x4000
00:24:57.959 NVMe0n1 : 10.01 7545.74 29.48 5931.20 0.00 9482.35 878.78 3019898.88
00:24:57.959 ===================================================================================================================
00:24:57.959 Total : 7545.74 29.48 5931.20 0.00 9482.35 0.00 3019898.88
00:24:57.959 0
00:24:57.959 18:41:04 -- host/timeout.sh@105 -- # killprocess 100107
00:24:57.959 18:41:04 -- common/autotest_common.sh@926 -- # '[' -z 100107 ']'
00:24:57.959 18:41:04 -- common/autotest_common.sh@930 -- # kill -0 100107
00:24:57.959 18:41:04 -- common/autotest_common.sh@931 -- # uname
00:24:57.959 18:41:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:57.959 18:41:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100107
00:24:57.959 killing process with pid 100107
Received shutdown signal, test time was about 10.000000 seconds
00:24:57.959
00:24:57.959 Latency(us)
00:24:57.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.959 ===================================================================================================================
00:24:57.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:57.959 18:41:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:57.959 18:41:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:57.959 18:41:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100107'
00:24:57.959 18:41:04 -- common/autotest_common.sh@945 -- # kill 100107
00:24:57.959 18:41:04 -- common/autotest_common.sh@950 -- # wait 100107
00:24:57.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:57.959 18:41:05 -- host/timeout.sh@110 -- # bdevperf_pid=100398
00:24:57.959 18:41:05 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:57.959 18:41:05 -- host/timeout.sh@112 -- # waitforlisten 100398 /var/tmp/bdevperf.sock
00:24:57.959 18:41:05 -- common/autotest_common.sh@819 -- # '[' -z 100398 ']'
00:24:57.959 18:41:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:57.959 18:41:05 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:57.959 18:41:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:57.959 18:41:05 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:57.959 18:41:05 -- common/autotest_common.sh@10 -- # set +x
00:24:57.959 [2024-07-14 18:41:05.044071] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
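The bdevperf instance started above is launched with -z so it sits idle and is driven over its RPC socket before perform_tests kicks off the workload; a minimal sketch of that flow, using the same commands this run records (socket path, NQN and timeout values as in the log; the backgrounding shown is illustrative):

  # launch bdevperf idle on core 2, serving RPCs on /var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # apply the test's bdev_nvme options, then attach the TCP controller with a 5 s ctrlr-loss timeout and a 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the queued randread workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests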
00:24:57.959 [2024-07-14 18:41:05.044158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100398 ] 00:24:57.959 [2024-07-14 18:41:05.177190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.959 [2024-07-14 18:41:05.259066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.896 18:41:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:58.896 18:41:05 -- common/autotest_common.sh@852 -- # return 0 00:24:58.896 18:41:05 -- host/timeout.sh@116 -- # dtrace_pid=100426 00:24:58.896 18:41:05 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100398 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:58.896 18:41:05 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:58.896 18:41:06 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:59.463 NVMe0n1 00:24:59.463 18:41:06 -- host/timeout.sh@124 -- # rpc_pid=100479 00:24:59.463 18:41:06 -- host/timeout.sh@125 -- # sleep 1 00:24:59.463 18:41:06 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.463 Running I/O for 10 seconds... 00:25:00.399 18:41:07 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.660 [2024-07-14 18:41:07.829648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.660 [2024-07-14 18:41:07.829806 - 18:41:07.830605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set (message repeated for each timestamp in this interval) 00:25:00.661 [2024-07-14
18:41:07.830627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.661 [2024-07-14 18:41:07.830636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.661 [2024-07-14 18:41:07.830645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3c20 is same with the state(5) to be set 00:25:00.661 [2024-07-14 18:41:07.830933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.661 [2024-07-14 18:41:07.830973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.661 [2024-07-14 18:41:07.830996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.661 [2024-07-14 18:41:07.831007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.661 [2024-07-14 18:41:07.831019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.661 [2024-07-14 18:41:07.831029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.661 [2024-07-14 18:41:07.831040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.661 [2024-07-14 18:41:07.831049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.661 [2024-07-14 18:41:07.831060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.661 [2024-07-14 18:41:07.831068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.661 [2024-07-14 18:41:07.831079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.661 [2024-07-14 18:41:07.831088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.661 [2024-07-14 18:41:07.831099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:00.662 [2024-07-14 18:41:07.831604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831816] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.831977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.831991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.662 [2024-07-14 18:41:07.832018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.662 [2024-07-14 18:41:07.832027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:00.663 [2024-07-14 18:41:07.832688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.663 [2024-07-14 18:41:07.832922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.663 [2024-07-14 18:41:07.832933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.832941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.832951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.832960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.832970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.832979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.832990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.664 [2024-07-14 18:41:07.833644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.664 [2024-07-14 18:41:07.833653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.665 [2024-07-14 18:41:07.833678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e98a0 is same with the state(5) to be set 00:25:00.665 [2024-07-14 18:41:07.833705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:00.665 [2024-07-14 18:41:07.833713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:00.665 [2024-07-14 18:41:07.833722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:8 PRP1 0x0 PRP2 0x0 00:25:00.665 [2024-07-14 18:41:07.833731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833784] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9e98a0 was disconnected and freed. reset controller. 
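At this point the bdevperf initiator has reacted to the listener removal triggered earlier in this trace: every outstanding read on I/O qpair 1 is completed as ABORTED - SQ DELETION, the disconnected qpair 0x9e98a0 is freed, and a controller reset begins. A minimal sketch of the RPC that would restore the listener so the resets below could succeed (hypothetical; this run deliberately leaves the listener removed, and the subsystem NQN, address, and port are copied from the trace above):

# Hypothetical recovery step, not executed in this run: re-add the TCP listener
# removed above so the reconnect attempts below stop failing with connect() errno 111.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420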
00:25:00.665 [2024-07-14 18:41:07.833874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.665 [2024-07-14 18:41:07.833891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.665 [2024-07-14 18:41:07.833910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.665 [2024-07-14 18:41:07.833929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.665 [2024-07-14 18:41:07.833948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.665 [2024-07-14 18:41:07.833957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb5e0 is same with the state(5) to be set 00:25:00.665 [2024-07-14 18:41:07.834208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.665 [2024-07-14 18:41:07.834240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb5e0 (9): Bad file descriptor 00:25:00.665 [2024-07-14 18:41:07.834347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.665 [2024-07-14 18:41:07.834405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.665 [2024-07-14 18:41:07.834422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cb5e0 with addr=10.0.0.2, port=4420 00:25:00.665 [2024-07-14 18:41:07.834433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb5e0 is same with the state(5) to be set 00:25:00.665 [2024-07-14 18:41:07.834452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb5e0 (9): Bad file descriptor 00:25:00.665 [2024-07-14 18:41:07.834468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.665 [2024-07-14 18:41:07.834477] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.665 [2024-07-14 18:41:07.834510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.665 [2024-07-14 18:41:07.846611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.665 [2024-07-14 18:41:07.846648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.665 18:41:07 -- host/timeout.sh@128 -- # wait 100479 00:25:02.568 [2024-07-14 18:41:09.846828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.568 [2024-07-14 18:41:09.846956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.568 [2024-07-14 18:41:09.846977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cb5e0 with addr=10.0.0.2, port=4420 00:25:02.568 [2024-07-14 18:41:09.846991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb5e0 is same with the state(5) to be set 00:25:02.568 [2024-07-14 18:41:09.847020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb5e0 (9): Bad file descriptor 00:25:02.568 [2024-07-14 18:41:09.847041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.568 [2024-07-14 18:41:09.847051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.568 [2024-07-14 18:41:09.847062] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.568 [2024-07-14 18:41:09.847090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.568 [2024-07-14 18:41:09.847102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.470 [2024-07-14 18:41:11.847324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.470 [2024-07-14 18:41:11.847445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.470 [2024-07-14 18:41:11.847466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cb5e0 with addr=10.0.0.2, port=4420 00:25:04.470 [2024-07-14 18:41:11.847480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb5e0 is same with the state(5) to be set 00:25:04.470 [2024-07-14 18:41:11.847535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb5e0 (9): Bad file descriptor 00:25:04.470 [2024-07-14 18:41:11.847566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:04.470 [2024-07-14 18:41:11.847582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:04.470 [2024-07-14 18:41:11.847593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.470 [2024-07-14 18:41:11.847622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.470 [2024-07-14 18:41:11.847648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.001 [2024-07-14 18:41:13.847709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:07.001 [2024-07-14 18:41:13.847782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.001 [2024-07-14 18:41:13.847810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.001 [2024-07-14 18:41:13.847820] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:07.001 [2024-07-14 18:41:13.847848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.568 00:25:07.569 Latency(us) 00:25:07.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.569 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:07.569 NVMe0n1 : 8.14 2737.82 10.69 15.72 0.00 46424.03 3366.17 7046430.72 00:25:07.569 =================================================================================================================== 00:25:07.569 Total : 2737.82 10.69 15.72 0.00 46424.03 3366.17 7046430.72 00:25:07.569 0 00:25:07.569 18:41:14 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:07.569 Attaching 5 probes... 00:25:07.569 1261.287465: reset bdev controller NVMe0 00:25:07.569 1261.362594: reconnect bdev controller NVMe0 00:25:07.569 3273.770381: reconnect delay bdev controller NVMe0 00:25:07.569 3273.806278: reconnect bdev controller NVMe0 00:25:07.569 5274.218643: reconnect delay bdev controller NVMe0 00:25:07.569 5274.292583: reconnect bdev controller NVMe0 00:25:07.569 7274.769403: reconnect delay bdev controller NVMe0 00:25:07.569 7274.790345: reconnect bdev controller NVMe0 00:25:07.569 18:41:14 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:07.569 18:41:14 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:07.569 18:41:14 -- host/timeout.sh@136 -- # kill 100426 00:25:07.569 18:41:14 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:07.569 18:41:14 -- host/timeout.sh@139 -- # killprocess 100398 00:25:07.569 18:41:14 -- common/autotest_common.sh@926 -- # '[' -z 100398 ']' 00:25:07.569 18:41:14 -- common/autotest_common.sh@930 -- # kill -0 100398 00:25:07.569 18:41:14 -- common/autotest_common.sh@931 -- # uname 00:25:07.569 18:41:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.569 18:41:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100398 00:25:07.569 killing process with pid 100398 00:25:07.569 Received shutdown signal, test time was about 8.205382 seconds 00:25:07.569 00:25:07.569 Latency(us) 00:25:07.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.569 =================================================================================================================== 00:25:07.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.569 18:41:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:07.569 18:41:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:07.569 18:41:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100398' 00:25:07.569 18:41:14 -- common/autotest_common.sh@945 -- # kill 100398 00:25:07.569 18:41:14 -- common/autotest_common.sh@950 -- # wait 100398 00:25:07.828 18:41:15 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.087 18:41:15 -- host/timeout.sh@143 -- # trap - 
SIGINT SIGTERM EXIT 00:25:08.087 18:41:15 -- host/timeout.sh@145 -- # nvmftestfini 00:25:08.087 18:41:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:08.087 18:41:15 -- nvmf/common.sh@116 -- # sync 00:25:08.087 18:41:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:08.087 18:41:15 -- nvmf/common.sh@119 -- # set +e 00:25:08.087 18:41:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:08.087 18:41:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:08.087 rmmod nvme_tcp 00:25:08.087 rmmod nvme_fabrics 00:25:08.087 rmmod nvme_keyring 00:25:08.087 18:41:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:08.087 18:41:15 -- nvmf/common.sh@123 -- # set -e 00:25:08.087 18:41:15 -- nvmf/common.sh@124 -- # return 0 00:25:08.087 18:41:15 -- nvmf/common.sh@477 -- # '[' -n 99816 ']' 00:25:08.087 18:41:15 -- nvmf/common.sh@478 -- # killprocess 99816 00:25:08.087 18:41:15 -- common/autotest_common.sh@926 -- # '[' -z 99816 ']' 00:25:08.087 18:41:15 -- common/autotest_common.sh@930 -- # kill -0 99816 00:25:08.087 18:41:15 -- common/autotest_common.sh@931 -- # uname 00:25:08.087 18:41:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:08.087 18:41:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99816 00:25:08.087 killing process with pid 99816 00:25:08.087 18:41:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:08.087 18:41:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:08.087 18:41:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99816' 00:25:08.087 18:41:15 -- common/autotest_common.sh@945 -- # kill 99816 00:25:08.087 18:41:15 -- common/autotest_common.sh@950 -- # wait 99816 00:25:08.345 18:41:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:08.345 18:41:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:08.345 18:41:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:08.345 18:41:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.345 18:41:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:08.345 18:41:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.345 18:41:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.345 18:41:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.345 18:41:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:08.345 00:25:08.345 real 0m46.949s 00:25:08.345 user 2m17.476s 00:25:08.345 sys 0m5.338s 00:25:08.345 18:41:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.345 18:41:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.345 ************************************ 00:25:08.345 END TEST nvmf_timeout 00:25:08.345 ************************************ 00:25:08.665 18:41:15 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:08.665 18:41:15 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:08.665 18:41:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:08.665 18:41:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.665 18:41:15 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:08.665 00:25:08.665 real 17m9.842s 00:25:08.665 user 54m37.749s 00:25:08.665 sys 3m45.243s 00:25:08.665 18:41:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.665 ************************************ 00:25:08.665 18:41:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.665 END TEST nvmf_tcp 00:25:08.665 ************************************ 00:25:08.665 18:41:15 -- spdk/autotest.sh@296 -- # 
[[ 0 -eq 0 ]] 00:25:08.665 18:41:15 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:08.665 18:41:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:08.665 18:41:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.665 18:41:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.665 ************************************ 00:25:08.665 START TEST spdkcli_nvmf_tcp 00:25:08.665 ************************************ 00:25:08.665 18:41:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:08.665 * Looking for test storage... 00:25:08.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:08.665 18:41:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:08.665 18:41:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:08.665 18:41:15 -- nvmf/common.sh@7 -- # uname -s 00:25:08.665 18:41:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.665 18:41:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.665 18:41:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.665 18:41:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.665 18:41:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.665 18:41:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.665 18:41:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.665 18:41:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.665 18:41:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.665 18:41:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.665 18:41:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:25:08.665 18:41:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:25:08.665 18:41:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.665 18:41:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.665 18:41:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:08.665 18:41:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:08.665 18:41:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.665 18:41:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.665 18:41:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.665 18:41:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.665 18:41:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.665 18:41:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.665 18:41:15 -- paths/export.sh@5 -- # export PATH 00:25:08.665 18:41:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.665 18:41:15 -- nvmf/common.sh@46 -- # : 0 00:25:08.665 18:41:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:08.665 18:41:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:08.665 18:41:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:08.665 18:41:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.665 18:41:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.665 18:41:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:08.665 18:41:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:08.665 18:41:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:08.665 18:41:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:08.665 18:41:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.665 18:41:15 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:08.665 18:41:15 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100698 00:25:08.665 18:41:16 -- spdkcli/common.sh@34 -- # waitforlisten 100698 00:25:08.665 18:41:16 -- common/autotest_common.sh@819 -- # '[' -z 100698 ']' 00:25:08.665 18:41:16 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:08.665 18:41:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.665 18:41:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:08.665 18:41:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.665 18:41:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:08.665 18:41:16 -- common/autotest_common.sh@10 -- # set +x 00:25:08.972 [2024-07-14 18:41:16.059958] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:08.972 [2024-07-14 18:41:16.060050] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100698 ] 00:25:08.972 [2024-07-14 18:41:16.201216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:08.972 [2024-07-14 18:41:16.274718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:08.972 [2024-07-14 18:41:16.274979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.972 [2024-07-14 18:41:16.275134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.909 18:41:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:09.909 18:41:16 -- common/autotest_common.sh@852 -- # return 0 00:25:09.909 18:41:16 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:09.909 18:41:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:09.909 18:41:16 -- common/autotest_common.sh@10 -- # set +x 00:25:09.909 18:41:17 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:09.909 18:41:17 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:09.909 18:41:17 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:09.909 18:41:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:09.909 18:41:17 -- common/autotest_common.sh@10 -- # set +x 00:25:09.909 18:41:17 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:09.909 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:09.909 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:09.909 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:09.909 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:09.909 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:09.909 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:09.909 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:09.909 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:09.909 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:09.909 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:09.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:09.909 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:09.909 ' 00:25:10.167 [2024-07-14 18:41:17.487322] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:12.700 [2024-07-14 18:41:19.725764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.635 [2024-07-14 18:41:21.000004] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:16.165 [2024-07-14 18:41:23.363965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:18.064 [2024-07-14 18:41:25.407507] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:19.964 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:19.964 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:19.964 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:19.964 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:19.964 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:19.964 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:19.964 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:19.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.964 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:19.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:19.964 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:19.964 18:41:27 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:19.964 18:41:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:19.964 18:41:27 -- common/autotest_common.sh@10 -- # set +x 00:25:19.964 18:41:27 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:19.964 18:41:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:19.964 18:41:27 -- common/autotest_common.sh@10 -- # set +x 00:25:19.964 18:41:27 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:19.964 18:41:27 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:20.222 18:41:27 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:20.222 18:41:27 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:20.222 18:41:27 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:20.222 18:41:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:20.222 18:41:27 -- common/autotest_common.sh@10 -- # set +x 00:25:20.480 18:41:27 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:20.480 18:41:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.480 18:41:27 -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.480 18:41:27 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:20.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:20.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:20.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:20.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:20.480 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:20.480 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.480 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:20.480 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:20.480 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:20.480 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:20.480 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:20.480 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:20.480 ' 00:25:25.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:25.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:25.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:25.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:25.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:25.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:25.746 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:25.746 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:25.746 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:25.746 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:25.746 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:25.746 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:25.746 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:25.746 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:25.746 18:41:33 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:25.746 18:41:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:25.746 18:41:33 -- common/autotest_common.sh@10 -- # set +x 00:25:25.746 18:41:33 -- spdkcli/nvmf.sh@90 -- # killprocess 100698 00:25:25.746 18:41:33 -- common/autotest_common.sh@926 -- # '[' -z 100698 ']' 00:25:25.746 18:41:33 -- common/autotest_common.sh@930 -- # kill -0 100698 00:25:25.746 18:41:33 -- common/autotest_common.sh@931 -- # uname 00:25:25.746 18:41:33 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:25.746 18:41:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100698 00:25:25.746 18:41:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:25.746 18:41:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:25.746 killing process with pid 100698 00:25:25.746 18:41:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100698' 00:25:25.746 18:41:33 -- common/autotest_common.sh@945 -- # kill 100698 00:25:25.746 [2024-07-14 18:41:33.140056] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:25.746 18:41:33 -- common/autotest_common.sh@950 -- # wait 100698 00:25:26.003 18:41:33 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:26.003 18:41:33 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:26.003 18:41:33 -- spdkcli/common.sh@13 -- # '[' -n 100698 ']' 00:25:26.003 18:41:33 -- spdkcli/common.sh@14 -- # killprocess 100698 00:25:26.003 18:41:33 -- common/autotest_common.sh@926 -- # '[' -z 100698 ']' 00:25:26.003 18:41:33 -- common/autotest_common.sh@930 -- # kill -0 100698 00:25:26.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100698) - No such process 00:25:26.003 Process with pid 100698 is not found 00:25:26.003 18:41:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100698 is not found' 00:25:26.003 18:41:33 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:26.003 18:41:33 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:26.003 18:41:33 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:26.003 00:25:26.003 real 0m17.449s 00:25:26.003 user 0m37.485s 00:25:26.003 sys 0m0.986s 00:25:26.003 18:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.003 18:41:33 -- common/autotest_common.sh@10 -- # set +x 00:25:26.003 ************************************ 00:25:26.003 END TEST spdkcli_nvmf_tcp 00:25:26.003 ************************************ 00:25:26.003 18:41:33 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:26.003 18:41:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:26.003 18:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:26.003 18:41:33 -- common/autotest_common.sh@10 -- # set +x 00:25:26.003 ************************************ 00:25:26.003 START TEST nvmf_identify_passthru 00:25:26.003 ************************************ 00:25:26.003 18:41:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:26.261 * Looking for test storage... 
00:25:26.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:26.261 18:41:33 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:26.261 18:41:33 -- nvmf/common.sh@7 -- # uname -s 00:25:26.261 18:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.261 18:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.261 18:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.261 18:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.261 18:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.261 18:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.261 18:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.261 18:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.261 18:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.261 18:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.261 18:41:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:25:26.261 18:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:25:26.261 18:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.261 18:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.261 18:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:26.261 18:41:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:26.261 18:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.261 18:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.261 18:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.261 18:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- paths/export.sh@5 -- # export PATH 00:25:26.261 18:41:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- nvmf/common.sh@46 -- # : 0 00:25:26.261 18:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:26.261 18:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:26.261 18:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:26.261 18:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.261 18:41:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.261 18:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:26.261 18:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:26.261 18:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:26.261 18:41:33 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:26.261 18:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.261 18:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.261 18:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.261 18:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- paths/export.sh@5 -- # export PATH 00:25:26.261 18:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.261 18:41:33 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:26.261 18:41:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:26.261 18:41:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.261 18:41:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:26.261 18:41:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:26.261 18:41:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:26.261 18:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.261 18:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:26.261 18:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.261 18:41:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:26.261 18:41:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:26.261 18:41:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:26.261 18:41:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:26.261 18:41:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:26.261 18:41:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:26.261 18:41:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.261 18:41:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.261 18:41:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:26.261 18:41:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:26.261 18:41:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:26.261 18:41:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:26.261 18:41:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:26.261 18:41:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.261 18:41:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:26.261 18:41:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:26.261 18:41:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:26.261 18:41:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:26.261 18:41:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:26.261 18:41:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:26.261 Cannot find device "nvmf_tgt_br" 00:25:26.261 18:41:33 -- nvmf/common.sh@154 -- # true 00:25:26.261 18:41:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:26.261 Cannot find device "nvmf_tgt_br2" 00:25:26.261 18:41:33 -- nvmf/common.sh@155 -- # true 00:25:26.261 18:41:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:26.261 18:41:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:26.261 Cannot find device "nvmf_tgt_br" 00:25:26.261 18:41:33 -- nvmf/common.sh@157 -- # true 00:25:26.261 18:41:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:26.261 Cannot find device "nvmf_tgt_br2" 00:25:26.261 18:41:33 -- nvmf/common.sh@158 -- # true 00:25:26.261 18:41:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:26.261 18:41:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:26.261 18:41:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:26.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:26.261 18:41:33 -- nvmf/common.sh@161 -- # true 00:25:26.261 18:41:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:26.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:26.261 18:41:33 -- nvmf/common.sh@162 -- # true 00:25:26.261 18:41:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:26.261 18:41:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:26.261 18:41:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:26.261 18:41:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:26.261 18:41:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:26.261 18:41:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:26.261 18:41:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:26.261 18:41:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:26.261 18:41:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:26.519 18:41:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:26.519 18:41:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:26.519 18:41:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:26.519 18:41:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:26.519 18:41:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:26.519 18:41:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:26.519 18:41:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:26.519 18:41:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:26.519 18:41:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:26.519 18:41:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:26.519 18:41:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:26.519 18:41:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:26.519 18:41:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:26.519 18:41:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:26.519 18:41:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:26.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:25:26.520 00:25:26.520 --- 10.0.0.2 ping statistics --- 00:25:26.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.520 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:26.520 18:41:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:26.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:26.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:26.520 00:25:26.520 --- 10.0.0.3 ping statistics --- 00:25:26.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.520 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:26.520 18:41:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:26.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:26.520 00:25:26.520 --- 10.0.0.1 ping statistics --- 00:25:26.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.520 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:26.520 18:41:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.520 18:41:33 -- nvmf/common.sh@421 -- # return 0 00:25:26.520 18:41:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:26.520 18:41:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.520 18:41:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:26.520 18:41:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:26.520 18:41:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.520 18:41:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:26.520 18:41:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:26.520 18:41:33 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:26.520 18:41:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:26.520 18:41:33 -- common/autotest_common.sh@10 -- # set +x 00:25:26.520 18:41:33 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:26.520 18:41:33 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:26.520 18:41:33 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:26.520 18:41:33 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:26.520 18:41:33 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:26.520 18:41:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:26.520 18:41:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:26.520 18:41:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:26.520 18:41:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:26.520 18:41:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:26.520 18:41:33 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:26.520 18:41:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:26.520 18:41:33 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:26.520 18:41:33 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:26.520 18:41:33 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:26.520 18:41:33 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:26.520 18:41:33 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:26.520 18:41:33 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:26.777 18:41:34 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:26.777 18:41:34 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:26.778 18:41:34 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:26.778 18:41:34 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:27.035 18:41:34 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:27.035 18:41:34 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:27.035 18:41:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:27.035 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.035 18:41:34 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:27.035 18:41:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:27.035 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.035 18:41:34 -- target/identify_passthru.sh@31 -- # nvmfpid=101193 00:25:27.035 18:41:34 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:27.035 18:41:34 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.035 18:41:34 -- target/identify_passthru.sh@35 -- # waitforlisten 101193 00:25:27.035 18:41:34 -- common/autotest_common.sh@819 -- # '[' -z 101193 ']' 00:25:27.035 18:41:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.035 18:41:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:27.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.035 18:41:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.035 18:41:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:27.035 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.035 [2024-07-14 18:41:34.327213] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:27.035 [2024-07-14 18:41:34.327315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.294 [2024-07-14 18:41:34.464418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.294 [2024-07-14 18:41:34.529179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:27.294 [2024-07-14 18:41:34.529349] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.294 [2024-07-14 18:41:34.529362] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.294 [2024-07-14 18:41:34.529370] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:27.294 [2024-07-14 18:41:34.529436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.294 [2024-07-14 18:41:34.529587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.294 [2024-07-14 18:41:34.530390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.294 [2024-07-14 18:41:34.530439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.294 18:41:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:27.294 18:41:34 -- common/autotest_common.sh@852 -- # return 0 00:25:27.294 18:41:34 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:27.294 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.294 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.294 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.294 18:41:34 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:27.294 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.294 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.294 [2024-07-14 18:41:34.692351] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:27.294 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.294 18:41:34 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.294 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.294 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.294 [2024-07-14 18:41:34.702396] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.552 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.552 18:41:34 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:27.552 18:41:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:27.552 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 18:41:34 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:27.552 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.552 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 Nvme0n1 00:25:27.552 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.552 18:41:34 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:27.552 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.552 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.552 18:41:34 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:27.552 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.552 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.552 18:41:34 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.552 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.552 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 [2024-07-14 18:41:34.848607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.552 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:27.552 18:41:34 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:27.552 18:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.552 18:41:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 [2024-07-14 18:41:34.856311] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:27.552 [ 00:25:27.552 { 00:25:27.552 "allow_any_host": true, 00:25:27.552 "hosts": [], 00:25:27.552 "listen_addresses": [], 00:25:27.552 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:27.552 "subtype": "Discovery" 00:25:27.552 }, 00:25:27.552 { 00:25:27.552 "allow_any_host": true, 00:25:27.552 "hosts": [], 00:25:27.552 "listen_addresses": [ 00:25:27.552 { 00:25:27.552 "adrfam": "IPv4", 00:25:27.552 "traddr": "10.0.0.2", 00:25:27.552 "transport": "TCP", 00:25:27.552 "trsvcid": "4420", 00:25:27.552 "trtype": "TCP" 00:25:27.552 } 00:25:27.552 ], 00:25:27.552 "max_cntlid": 65519, 00:25:27.552 "max_namespaces": 1, 00:25:27.552 "min_cntlid": 1, 00:25:27.552 "model_number": "SPDK bdev Controller", 00:25:27.552 "namespaces": [ 00:25:27.552 { 00:25:27.552 "bdev_name": "Nvme0n1", 00:25:27.552 "name": "Nvme0n1", 00:25:27.552 "nguid": "E01432AB23C449F7810ECAD0296D12D6", 00:25:27.552 "nsid": 1, 00:25:27.552 "uuid": "e01432ab-23c4-49f7-810e-cad0296d12d6" 00:25:27.552 } 00:25:27.552 ], 00:25:27.552 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.552 "serial_number": "SPDK00000000000001", 00:25:27.552 "subtype": "NVMe" 00:25:27.552 } 00:25:27.552 ] 00:25:27.552 18:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.552 18:41:34 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:27.552 18:41:34 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:27.552 18:41:34 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:27.810 18:41:35 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:27.810 18:41:35 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:27.810 18:41:35 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:27.810 18:41:35 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:28.069 18:41:35 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:28.069 18:41:35 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:28.069 18:41:35 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:28.069 18:41:35 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.069 18:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.069 18:41:35 -- common/autotest_common.sh@10 -- # set +x 00:25:28.069 18:41:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.069 18:41:35 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:28.069 18:41:35 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:28.069 18:41:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:28.069 18:41:35 -- nvmf/common.sh@116 -- # sync 00:25:28.069 18:41:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:28.069 18:41:35 -- nvmf/common.sh@119 -- # set +e 00:25:28.069 18:41:35 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:28.069 18:41:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:28.069 rmmod nvme_tcp 00:25:28.069 rmmod nvme_fabrics 00:25:28.069 rmmod nvme_keyring 00:25:28.069 18:41:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:28.069 18:41:35 -- nvmf/common.sh@123 -- # set -e 00:25:28.069 18:41:35 -- nvmf/common.sh@124 -- # return 0 00:25:28.069 18:41:35 -- nvmf/common.sh@477 -- # '[' -n 101193 ']' 00:25:28.069 18:41:35 -- nvmf/common.sh@478 -- # killprocess 101193 00:25:28.069 18:41:35 -- common/autotest_common.sh@926 -- # '[' -z 101193 ']' 00:25:28.069 18:41:35 -- common/autotest_common.sh@930 -- # kill -0 101193 00:25:28.069 18:41:35 -- common/autotest_common.sh@931 -- # uname 00:25:28.069 18:41:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:28.069 18:41:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101193 00:25:28.069 18:41:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:28.069 18:41:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:28.069 killing process with pid 101193 00:25:28.069 18:41:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101193' 00:25:28.069 18:41:35 -- common/autotest_common.sh@945 -- # kill 101193 00:25:28.069 [2024-07-14 18:41:35.455400] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:28.069 18:41:35 -- common/autotest_common.sh@950 -- # wait 101193 00:25:28.327 18:41:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:28.327 18:41:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:28.327 18:41:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:28.327 18:41:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.327 18:41:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:28.327 18:41:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.327 18:41:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:28.327 18:41:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.327 18:41:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:28.327 00:25:28.327 real 0m2.309s 00:25:28.327 user 0m4.726s 00:25:28.327 sys 0m0.761s 00:25:28.327 18:41:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.327 ************************************ 00:25:28.327 END TEST nvmf_identify_passthru 00:25:28.327 18:41:35 -- common/autotest_common.sh@10 -- # set +x 00:25:28.327 ************************************ 00:25:28.327 18:41:35 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:28.327 18:41:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.327 18:41:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.328 18:41:35 -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 ************************************ 00:25:28.328 START TEST nvmf_dif 00:25:28.328 ************************************ 00:25:28.328 18:41:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:28.586 * Looking for test storage... 
00:25:28.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:28.586 18:41:35 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.586 18:41:35 -- nvmf/common.sh@7 -- # uname -s 00:25:28.586 18:41:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.586 18:41:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.586 18:41:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.586 18:41:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.586 18:41:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.586 18:41:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.586 18:41:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.586 18:41:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.586 18:41:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.586 18:41:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.586 18:41:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:25:28.586 18:41:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:25:28.586 18:41:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.586 18:41:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.586 18:41:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.586 18:41:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.586 18:41:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.586 18:41:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.586 18:41:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.586 18:41:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.586 18:41:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.586 18:41:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.586 18:41:35 -- paths/export.sh@5 -- # export PATH 00:25:28.586 18:41:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.586 18:41:35 -- nvmf/common.sh@46 -- # : 0 00:25:28.586 18:41:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:28.586 18:41:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:28.586 18:41:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:28.586 18:41:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.586 18:41:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.586 18:41:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:28.586 18:41:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:28.586 18:41:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:28.586 18:41:35 -- target/dif.sh@15 -- # NULL_META=16 00:25:28.586 18:41:35 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:28.586 18:41:35 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:28.586 18:41:35 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:28.586 18:41:35 -- target/dif.sh@135 -- # nvmftestinit 00:25:28.586 18:41:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:28.586 18:41:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.586 18:41:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:28.586 18:41:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:28.586 18:41:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:28.586 18:41:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.586 18:41:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:28.586 18:41:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.586 18:41:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:28.586 18:41:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:28.586 18:41:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:28.586 18:41:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:28.586 18:41:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:28.586 18:41:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:28.586 18:41:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.586 18:41:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.586 18:41:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:28.586 18:41:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:28.586 18:41:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.586 18:41:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.586 18:41:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.586 18:41:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.586 18:41:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.586 18:41:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.586 18:41:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.586 18:41:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.587 18:41:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:28.587 18:41:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:28.587 Cannot find device "nvmf_tgt_br" 
00:25:28.587 18:41:35 -- nvmf/common.sh@154 -- # true 00:25:28.587 18:41:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.587 Cannot find device "nvmf_tgt_br2" 00:25:28.587 18:41:35 -- nvmf/common.sh@155 -- # true 00:25:28.587 18:41:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:28.587 18:41:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:28.587 Cannot find device "nvmf_tgt_br" 00:25:28.587 18:41:35 -- nvmf/common.sh@157 -- # true 00:25:28.587 18:41:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:28.587 Cannot find device "nvmf_tgt_br2" 00:25:28.587 18:41:35 -- nvmf/common.sh@158 -- # true 00:25:28.587 18:41:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:28.587 18:41:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:28.587 18:41:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.587 18:41:35 -- nvmf/common.sh@161 -- # true 00:25:28.587 18:41:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.587 18:41:35 -- nvmf/common.sh@162 -- # true 00:25:28.587 18:41:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:28.587 18:41:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:28.587 18:41:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:28.587 18:41:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:28.587 18:41:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:28.587 18:41:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:28.844 18:41:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:28.844 18:41:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:28.844 18:41:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:28.844 18:41:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:28.844 18:41:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:28.844 18:41:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:28.844 18:41:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:28.844 18:41:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:28.844 18:41:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:28.845 18:41:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:28.845 18:41:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:28.845 18:41:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:28.845 18:41:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:28.845 18:41:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:28.845 18:41:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:28.845 18:41:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:28.845 18:41:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:28.845 18:41:36 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:28.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:25:28.845 00:25:28.845 --- 10.0.0.2 ping statistics --- 00:25:28.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.845 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:28.845 18:41:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:28.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:28.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:25:28.845 00:25:28.845 --- 10.0.0.3 ping statistics --- 00:25:28.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.845 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:28.845 18:41:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:28.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:28.845 00:25:28.845 --- 10.0.0.1 ping statistics --- 00:25:28.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.845 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:28.845 18:41:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.845 18:41:36 -- nvmf/common.sh@421 -- # return 0 00:25:28.845 18:41:36 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:28.845 18:41:36 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:29.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:29.102 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:29.103 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:29.103 18:41:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.103 18:41:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.103 18:41:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.103 18:41:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.103 18:41:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.103 18:41:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.362 18:41:36 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:29.362 18:41:36 -- target/dif.sh@137 -- # nvmfappstart 00:25:29.362 18:41:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:29.362 18:41:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.362 18:41:36 -- common/autotest_common.sh@10 -- # set +x 00:25:29.362 18:41:36 -- nvmf/common.sh@469 -- # nvmfpid=101529 00:25:29.362 18:41:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:29.362 18:41:36 -- nvmf/common.sh@470 -- # waitforlisten 101529 00:25:29.362 18:41:36 -- common/autotest_common.sh@819 -- # '[' -z 101529 ']' 00:25:29.362 18:41:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.362 18:41:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.362 18:41:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
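The nvmftestinit/nvmf_veth_init sequence above builds the virtual topology the dif tests run against: one initiator veth on the host, two target veths moved into a dedicated network namespace, the host-side peers enslaved to a bridge, and the target application launched inside that namespace once ping confirms 10.0.0.1/2/3 are reachable. A condensed sketch of the commands visible in the log (link-up steps and the cleanup of stale interfaces are omitted; names, addresses and paths are the ones used in this run):

    # namespace plus three veth pairs (initiator, two target interfaces)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target-side interfaces live in the namespace; addresses match the log
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # host-side peers bridged together, TCP port 4420 opened for the initiator
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # target started inside the namespace, as in the nvmfappstart line above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF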
00:25:29.362 18:41:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.362 18:41:36 -- common/autotest_common.sh@10 -- # set +x 00:25:29.362 [2024-07-14 18:41:36.600851] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:29.362 [2024-07-14 18:41:36.600938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.362 [2024-07-14 18:41:36.745997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.621 [2024-07-14 18:41:36.824015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:29.621 [2024-07-14 18:41:36.824178] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.621 [2024-07-14 18:41:36.824194] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.621 [2024-07-14 18:41:36.824205] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.621 [2024-07-14 18:41:36.824242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.189 18:41:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.189 18:41:37 -- common/autotest_common.sh@852 -- # return 0 00:25:30.189 18:41:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:30.189 18:41:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:30.189 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.189 18:41:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.189 18:41:37 -- target/dif.sh@139 -- # create_transport 00:25:30.189 18:41:37 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:30.189 18:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.189 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.189 [2024-07-14 18:41:37.589011] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.189 18:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.189 18:41:37 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:30.189 18:41:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:30.189 18:41:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.189 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.189 ************************************ 00:25:30.189 START TEST fio_dif_1_default 00:25:30.189 ************************************ 00:25:30.189 18:41:37 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:30.189 18:41:37 -- target/dif.sh@86 -- # create_subsystems 0 00:25:30.189 18:41:37 -- target/dif.sh@28 -- # local sub 00:25:30.189 18:41:37 -- target/dif.sh@30 -- # for sub in "$@" 00:25:30.189 18:41:37 -- target/dif.sh@31 -- # create_subsystem 0 00:25:30.189 18:41:37 -- target/dif.sh@18 -- # local sub_id=0 00:25:30.189 18:41:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:30.189 18:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.189 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.448 bdev_null0 00:25:30.448 18:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.448 18:41:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:30.448 18:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.448 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.448 18:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.448 18:41:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:30.448 18:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.448 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.448 18:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.448 18:41:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.448 18:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.448 18:41:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.448 [2024-07-14 18:41:37.637132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.448 18:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.448 18:41:37 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:30.448 18:41:37 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:30.448 18:41:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:30.448 18:41:37 -- nvmf/common.sh@520 -- # config=() 00:25:30.448 18:41:37 -- nvmf/common.sh@520 -- # local subsystem config 00:25:30.448 18:41:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.448 18:41:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.448 { 00:25:30.448 "params": { 00:25:30.448 "name": "Nvme$subsystem", 00:25:30.448 "trtype": "$TEST_TRANSPORT", 00:25:30.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.448 "adrfam": "ipv4", 00:25:30.448 "trsvcid": "$NVMF_PORT", 00:25:30.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.448 "hdgst": ${hdgst:-false}, 00:25:30.448 "ddgst": ${ddgst:-false} 00:25:30.448 }, 00:25:30.448 "method": "bdev_nvme_attach_controller" 00:25:30.448 } 00:25:30.448 EOF 00:25:30.448 )") 00:25:30.448 18:41:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.448 18:41:37 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.448 18:41:37 -- target/dif.sh@82 -- # gen_fio_conf 00:25:30.448 18:41:37 -- target/dif.sh@54 -- # local file 00:25:30.448 18:41:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:30.448 18:41:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:30.448 18:41:37 -- target/dif.sh@56 -- # cat 00:25:30.448 18:41:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:30.448 18:41:37 -- nvmf/common.sh@542 -- # cat 00:25:30.448 18:41:37 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.448 18:41:37 -- common/autotest_common.sh@1320 -- # shift 00:25:30.448 18:41:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:30.448 18:41:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:30.448 18:41:37 -- target/dif.sh@72 -- # (( file = 1 )) 
00:25:30.448 18:41:37 -- target/dif.sh@72 -- # (( file <= files )) 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:30.448 18:41:37 -- nvmf/common.sh@544 -- # jq . 00:25:30.448 18:41:37 -- nvmf/common.sh@545 -- # IFS=, 00:25:30.448 18:41:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:30.448 "params": { 00:25:30.448 "name": "Nvme0", 00:25:30.448 "trtype": "tcp", 00:25:30.448 "traddr": "10.0.0.2", 00:25:30.448 "adrfam": "ipv4", 00:25:30.448 "trsvcid": "4420", 00:25:30.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:30.448 "hdgst": false, 00:25:30.448 "ddgst": false 00:25:30.448 }, 00:25:30.448 "method": "bdev_nvme_attach_controller" 00:25:30.448 }' 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:30.448 18:41:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:30.448 18:41:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:30.448 18:41:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:30.448 18:41:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:30.448 18:41:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:30.448 18:41:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.448 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:30.448 fio-3.35 00:25:30.448 Starting 1 thread 00:25:31.016 [2024-07-14 18:41:38.271271] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
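Note that fio here does not go through a kernel NVMe-oF initiator: the LD_PRELOADed spdk_bdev plugin turns fio into an SPDK application, the JSON passed on /dev/fd/62 attaches an NVMe bdev over TCP to the subsystem created above, and the job file on /dev/fd/61 points at that bdev. A standalone approximation using ordinary files instead of the /dev/fd process substitutions follows; the JSON wrapper around the printed bdev_nvme_attach_controller parameters uses SPDK's usual bdev-subsystem layout, and the job options are inferred from the "rw=randread, bs=4096B, ioengine=spdk_bdev, iodepth=4" banner rather than copied from the log.

    # bdev.json - configuration consumed by the spdk_bdev ioengine
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } } ] } ] }

    # filename0.fio - job roughly matching the banner above (names inferred)
    [filename0]
    ioengine=spdk_bdev
    thread=1
    filename=Nvme0n1
    rw=randread
    bs=4k
    iodepth=4

    # invocation, with the fio bdev plugin preloaded as in the log
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --spdk_json_conf=bdev.json filename0.fio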
00:25:31.016 [2024-07-14 18:41:38.271356] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:41.013 00:25:41.013 filename0: (groupid=0, jobs=1): err= 0: pid=101609: Sun Jul 14 18:41:48 2024 00:25:41.013 read: IOPS=914, BW=3659KiB/s (3747kB/s)(35.8MiB/10009msec) 00:25:41.013 slat (nsec): min=6550, max=65526, avg=8512.13, stdev=3640.82 00:25:41.013 clat (usec): min=389, max=42471, avg=4346.67, stdev=11911.06 00:25:41.013 lat (usec): min=396, max=42487, avg=4355.18, stdev=11911.01 00:25:41.013 clat percentiles (usec): 00:25:41.013 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 433], 00:25:41.013 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 474], 00:25:41.013 | 70.00th=[ 486], 80.00th=[ 502], 90.00th=[ 570], 95.00th=[40633], 00:25:41.013 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:41.013 | 99.99th=[42730] 00:25:41.013 bw ( KiB/s): min= 2304, max= 5920, per=100.00%, avg=3744.00, stdev=1143.42, samples=19 00:25:41.013 iops : min= 576, max= 1480, avg=936.00, stdev=285.86, samples=19 00:25:41.013 lat (usec) : 500=79.16%, 750=11.18% 00:25:41.013 lat (msec) : 10=0.04%, 50=9.61% 00:25:41.013 cpu : usr=91.46%, sys=7.81%, ctx=24, majf=0, minf=0 00:25:41.013 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:41.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.013 issued rwts: total=9156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.013 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:41.013 00:25:41.013 Run status group 0 (all jobs): 00:25:41.013 READ: bw=3659KiB/s (3747kB/s), 3659KiB/s-3659KiB/s (3747kB/s-3747kB/s), io=35.8MiB (37.5MB), run=10009-10009msec 00:25:41.271 18:41:48 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:41.271 18:41:48 -- target/dif.sh@43 -- # local sub 00:25:41.271 18:41:48 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.271 18:41:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:41.271 18:41:48 -- target/dif.sh@36 -- # local sub_id=0 00:25:41.271 18:41:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.271 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.271 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.271 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.271 18:41:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:41.271 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.271 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 ************************************ 00:25:41.272 END TEST fio_dif_1_default 00:25:41.272 ************************************ 00:25:41.272 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.272 00:25:41.272 real 0m11.022s 00:25:41.272 user 0m9.811s 00:25:41.272 sys 0m1.045s 00:25:41.272 18:41:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.272 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 18:41:48 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:41.272 18:41:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:41.272 18:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.272 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 ************************************ 00:25:41.272 START TEST 
fio_dif_1_multi_subsystems 00:25:41.272 ************************************ 00:25:41.272 18:41:48 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:41.272 18:41:48 -- target/dif.sh@92 -- # local files=1 00:25:41.272 18:41:48 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:41.272 18:41:48 -- target/dif.sh@28 -- # local sub 00:25:41.272 18:41:48 -- target/dif.sh@30 -- # for sub in "$@" 00:25:41.272 18:41:48 -- target/dif.sh@31 -- # create_subsystem 0 00:25:41.272 18:41:48 -- target/dif.sh@18 -- # local sub_id=0 00:25:41.272 18:41:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:41.272 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.272 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 bdev_null0 00:25:41.272 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.272 18:41:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:41.272 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.272 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.272 18:41:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:41.272 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.272 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.530 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.530 18:41:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.531 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.531 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 [2024-07-14 18:41:48.706919] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.531 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.531 18:41:48 -- target/dif.sh@30 -- # for sub in "$@" 00:25:41.531 18:41:48 -- target/dif.sh@31 -- # create_subsystem 1 00:25:41.531 18:41:48 -- target/dif.sh@18 -- # local sub_id=1 00:25:41.531 18:41:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:41.531 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.531 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 bdev_null1 00:25:41.531 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.531 18:41:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:41.531 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.531 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.531 18:41:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:41.531 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.531 18:41:48 -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.531 18:41:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.531 18:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.531 18:41:48 -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.531 18:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.531 18:41:48 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:41.531 18:41:48 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:41.531 18:41:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:41.531 18:41:48 -- nvmf/common.sh@520 -- # config=() 00:25:41.531 18:41:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.531 18:41:48 -- nvmf/common.sh@520 -- # local subsystem config 00:25:41.531 18:41:48 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.531 18:41:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.531 18:41:48 -- target/dif.sh@82 -- # gen_fio_conf 00:25:41.531 18:41:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:41.531 18:41:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.531 { 00:25:41.531 "params": { 00:25:41.531 "name": "Nvme$subsystem", 00:25:41.531 "trtype": "$TEST_TRANSPORT", 00:25:41.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.531 "adrfam": "ipv4", 00:25:41.531 "trsvcid": "$NVMF_PORT", 00:25:41.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.531 "hdgst": ${hdgst:-false}, 00:25:41.531 "ddgst": ${ddgst:-false} 00:25:41.531 }, 00:25:41.531 "method": "bdev_nvme_attach_controller" 00:25:41.531 } 00:25:41.531 EOF 00:25:41.531 )") 00:25:41.531 18:41:48 -- target/dif.sh@54 -- # local file 00:25:41.531 18:41:48 -- target/dif.sh@56 -- # cat 00:25:41.531 18:41:48 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:41.531 18:41:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:41.531 18:41:48 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.531 18:41:48 -- common/autotest_common.sh@1320 -- # shift 00:25:41.531 18:41:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:41.531 18:41:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.531 18:41:48 -- nvmf/common.sh@542 -- # cat 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.531 18:41:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:41.531 18:41:48 -- target/dif.sh@72 -- # (( file <= files )) 00:25:41.531 18:41:48 -- target/dif.sh@73 -- # cat 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:41.531 18:41:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.531 18:41:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.531 { 00:25:41.531 "params": { 00:25:41.531 "name": "Nvme$subsystem", 00:25:41.531 "trtype": "$TEST_TRANSPORT", 00:25:41.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.531 "adrfam": "ipv4", 00:25:41.531 "trsvcid": "$NVMF_PORT", 00:25:41.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.531 "hdgst": ${hdgst:-false}, 00:25:41.531 "ddgst": ${ddgst:-false} 00:25:41.531 }, 00:25:41.531 "method": "bdev_nvme_attach_controller" 00:25:41.531 } 00:25:41.531 EOF 00:25:41.531 )") 00:25:41.531 18:41:48 -- nvmf/common.sh@542 -- # cat 00:25:41.531 18:41:48 -- target/dif.sh@72 
-- # (( file++ )) 00:25:41.531 18:41:48 -- target/dif.sh@72 -- # (( file <= files )) 00:25:41.531 18:41:48 -- nvmf/common.sh@544 -- # jq . 00:25:41.531 18:41:48 -- nvmf/common.sh@545 -- # IFS=, 00:25:41.531 18:41:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:41.531 "params": { 00:25:41.531 "name": "Nvme0", 00:25:41.531 "trtype": "tcp", 00:25:41.531 "traddr": "10.0.0.2", 00:25:41.531 "adrfam": "ipv4", 00:25:41.531 "trsvcid": "4420", 00:25:41.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:41.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:41.531 "hdgst": false, 00:25:41.531 "ddgst": false 00:25:41.531 }, 00:25:41.531 "method": "bdev_nvme_attach_controller" 00:25:41.531 },{ 00:25:41.531 "params": { 00:25:41.531 "name": "Nvme1", 00:25:41.531 "trtype": "tcp", 00:25:41.531 "traddr": "10.0.0.2", 00:25:41.531 "adrfam": "ipv4", 00:25:41.531 "trsvcid": "4420", 00:25:41.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.531 "hdgst": false, 00:25:41.531 "ddgst": false 00:25:41.531 }, 00:25:41.531 "method": "bdev_nvme_attach_controller" 00:25:41.531 }' 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:41.531 18:41:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:41.531 18:41:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:41.531 18:41:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:41.531 18:41:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:41.531 18:41:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:41.531 18:41:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.531 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:41.531 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:41.531 fio-3.35 00:25:41.531 Starting 2 threads 00:25:42.099 [2024-07-14 18:41:49.483199] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:42.099 [2024-07-14 18:41:49.483273] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:54.326 00:25:54.326 filename0: (groupid=0, jobs=1): err= 0: pid=101768: Sun Jul 14 18:41:59 2024 00:25:54.326 read: IOPS=208, BW=834KiB/s (854kB/s)(8336KiB/10001msec) 00:25:54.326 slat (usec): min=6, max=161, avg= 8.67, stdev= 5.84 00:25:54.326 clat (usec): min=370, max=42539, avg=19168.32, stdev=20174.85 00:25:54.326 lat (usec): min=377, max=42564, avg=19176.98, stdev=20174.78 00:25:54.326 clat percentiles (usec): 00:25:54.326 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:25:54.326 | 30.00th=[ 437], 40.00th=[ 474], 50.00th=[ 553], 60.00th=[40633], 00:25:54.326 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:54.326 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:25:54.326 | 99.99th=[42730] 00:25:54.326 bw ( KiB/s): min= 544, max= 1632, per=53.05%, avg=818.42, stdev=263.98, samples=19 00:25:54.326 iops : min= 136, max= 408, avg=204.58, stdev=66.00, samples=19 00:25:54.326 lat (usec) : 500=46.69%, 750=6.00%, 1000=0.86% 00:25:54.326 lat (msec) : 2=0.19%, 50=46.26% 00:25:54.326 cpu : usr=94.69%, sys=4.63%, ctx=229, majf=0, minf=0 00:25:54.326 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.326 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.326 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:54.326 filename1: (groupid=0, jobs=1): err= 0: pid=101769: Sun Jul 14 18:41:59 2024 00:25:54.326 read: IOPS=177, BW=709KiB/s (726kB/s)(7088KiB/10003msec) 00:25:54.326 slat (nsec): min=6243, max=40360, avg=8399.23, stdev=3241.73 00:25:54.326 clat (usec): min=375, max=42831, avg=22552.29, stdev=20139.09 00:25:54.326 lat (usec): min=381, max=42849, avg=22560.69, stdev=20138.94 00:25:54.326 clat percentiles (usec): 00:25:54.326 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 433], 00:25:54.326 | 30.00th=[ 465], 40.00th=[ 506], 50.00th=[40633], 60.00th=[40633], 00:25:54.326 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:54.326 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:25:54.326 | 99.99th=[42730] 00:25:54.326 bw ( KiB/s): min= 480, max= 1056, per=46.63%, avg=719.05, stdev=144.40, samples=19 00:25:54.326 iops : min= 120, max= 264, avg=179.74, stdev=36.09, samples=19 00:25:54.326 lat (usec) : 500=39.28%, 750=5.25%, 1000=0.62% 00:25:54.326 lat (msec) : 2=0.23%, 50=54.63% 00:25:54.326 cpu : usr=95.60%, sys=4.02%, ctx=16, majf=0, minf=0 00:25:54.326 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.326 issued rwts: total=1772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.326 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:54.326 00:25:54.326 Run status group 0 (all jobs): 00:25:54.326 READ: bw=1542KiB/s (1579kB/s), 709KiB/s-834KiB/s (726kB/s-854kB/s), io=15.1MiB (15.8MB), run=10001-10003msec 00:25:54.326 18:41:59 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:54.326 18:41:59 -- target/dif.sh@43 -- # local sub 00:25:54.326 18:41:59 -- target/dif.sh@45 -- # for sub in "$@" 
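Each fio_dif_* variant provisions its namespaces the same way before the run and removes them afterwards: a small null bdev (64 MB, 512-byte blocks) carrying 16 bytes of per-block metadata and the requested DIF type, exported through its own subsystem with a TCP listener, then deleted once fio completes. The pairing for the second subsystem of this test, with rpc.py shown as an assumed stand-in for the rpc_cmd wrapper used by the scripts (the arguments are the ones issued above and in the teardown that follows):

    # setup: DIF-type-1 null bdev behind its own subsystem and TCP listener
    scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        --serial-number 53313233-1 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # teardown after the fio run
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete bdev_null1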
00:25:54.326 18:41:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:54.326 18:41:59 -- target/dif.sh@36 -- # local sub_id=0 00:25:54.326 18:41:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:54.326 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.326 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.326 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.326 18:41:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:54.326 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.326 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.326 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.326 18:41:59 -- target/dif.sh@45 -- # for sub in "$@" 00:25:54.326 18:41:59 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:54.326 18:41:59 -- target/dif.sh@36 -- # local sub_id=1 00:25:54.326 18:41:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.326 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.326 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.326 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.326 18:41:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:54.326 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.326 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.326 ************************************ 00:25:54.326 END TEST fio_dif_1_multi_subsystems 00:25:54.326 ************************************ 00:25:54.326 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.326 00:25:54.326 real 0m11.168s 00:25:54.326 user 0m19.837s 00:25:54.326 sys 0m1.127s 00:25:54.326 18:41:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.326 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.326 18:41:59 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:54.327 18:41:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:54.327 18:41:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:54.327 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.327 ************************************ 00:25:54.327 START TEST fio_dif_rand_params 00:25:54.327 ************************************ 00:25:54.327 18:41:59 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:54.327 18:41:59 -- target/dif.sh@100 -- # local NULL_DIF 00:25:54.327 18:41:59 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:54.327 18:41:59 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:54.327 18:41:59 -- target/dif.sh@103 -- # bs=128k 00:25:54.327 18:41:59 -- target/dif.sh@103 -- # numjobs=3 00:25:54.327 18:41:59 -- target/dif.sh@103 -- # iodepth=3 00:25:54.327 18:41:59 -- target/dif.sh@103 -- # runtime=5 00:25:54.327 18:41:59 -- target/dif.sh@105 -- # create_subsystems 0 00:25:54.327 18:41:59 -- target/dif.sh@28 -- # local sub 00:25:54.327 18:41:59 -- target/dif.sh@30 -- # for sub in "$@" 00:25:54.327 18:41:59 -- target/dif.sh@31 -- # create_subsystem 0 00:25:54.327 18:41:59 -- target/dif.sh@18 -- # local sub_id=0 00:25:54.327 18:41:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:54.327 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.327 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.327 bdev_null0 00:25:54.327 18:41:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.327 18:41:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:54.327 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.327 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.327 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.327 18:41:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:54.327 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.327 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.327 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.327 18:41:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:54.327 18:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.327 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.327 [2024-07-14 18:41:59.932492] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.327 18:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.327 18:41:59 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:54.327 18:41:59 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:54.327 18:41:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:54.327 18:41:59 -- nvmf/common.sh@520 -- # config=() 00:25:54.327 18:41:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.327 18:41:59 -- nvmf/common.sh@520 -- # local subsystem config 00:25:54.327 18:41:59 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.327 18:41:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.327 18:41:59 -- target/dif.sh@82 -- # gen_fio_conf 00:25:54.327 18:41:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.327 { 00:25:54.327 "params": { 00:25:54.327 "name": "Nvme$subsystem", 00:25:54.327 "trtype": "$TEST_TRANSPORT", 00:25:54.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.327 "adrfam": "ipv4", 00:25:54.327 "trsvcid": "$NVMF_PORT", 00:25:54.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.327 "hdgst": ${hdgst:-false}, 00:25:54.327 "ddgst": ${ddgst:-false} 00:25:54.327 }, 00:25:54.327 "method": "bdev_nvme_attach_controller" 00:25:54.327 } 00:25:54.327 EOF 00:25:54.327 )") 00:25:54.327 18:41:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:54.327 18:41:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:54.327 18:41:59 -- target/dif.sh@54 -- # local file 00:25:54.327 18:41:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:54.327 18:41:59 -- target/dif.sh@56 -- # cat 00:25:54.327 18:41:59 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.327 18:41:59 -- common/autotest_common.sh@1320 -- # shift 00:25:54.327 18:41:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:54.327 18:41:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.327 18:41:59 -- nvmf/common.sh@542 -- # cat 00:25:54.327 18:41:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.327 18:41:59 
-- common/autotest_common.sh@1324 -- # grep libasan 00:25:54.327 18:41:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:54.327 18:41:59 -- target/dif.sh@72 -- # (( file <= files )) 00:25:54.327 18:41:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:54.327 18:41:59 -- nvmf/common.sh@544 -- # jq . 00:25:54.327 18:41:59 -- nvmf/common.sh@545 -- # IFS=, 00:25:54.327 18:41:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:54.327 "params": { 00:25:54.327 "name": "Nvme0", 00:25:54.327 "trtype": "tcp", 00:25:54.327 "traddr": "10.0.0.2", 00:25:54.327 "adrfam": "ipv4", 00:25:54.327 "trsvcid": "4420", 00:25:54.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.327 "hdgst": false, 00:25:54.327 "ddgst": false 00:25:54.327 }, 00:25:54.327 "method": "bdev_nvme_attach_controller" 00:25:54.327 }' 00:25:54.327 18:41:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:54.327 18:41:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:54.327 18:41:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.327 18:41:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.327 18:41:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:54.327 18:41:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:54.327 18:42:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:54.327 18:42:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:54.327 18:42:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:54.327 18:42:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.327 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:54.327 ... 00:25:54.327 fio-3.35 00:25:54.327 Starting 3 threads 00:25:54.327 [2024-07-14 18:42:00.574312] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
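fio_dif_rand_params repeats the same target setup with varying DIF types and job shapes; this first pass uses a DIF type 3 null bdev and three 128 KiB random-read jobs at queue depth 3 for roughly 5 seconds. A job section matching the parameters echoed above, fed to the same LD_PRELOADed fio invocation as the earlier runs (the section name, filename and time_based flag are inferred, not copied from the log):

    [filename0]
    ioengine=spdk_bdev
    thread=1
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1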
00:25:54.327 [2024-07-14 18:42:00.574396] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:58.514 00:25:58.514 filename0: (groupid=0, jobs=1): err= 0: pid=101925: Sun Jul 14 18:42:05 2024 00:25:58.514 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(146MiB/5004msec) 00:25:58.514 slat (nsec): min=6448, max=79198, avg=13147.98, stdev=6559.03 00:25:58.514 clat (usec): min=3752, max=54467, avg=12875.39, stdev=9129.78 00:25:58.514 lat (usec): min=3762, max=54487, avg=12888.54, stdev=9130.05 00:25:58.514 clat percentiles (usec): 00:25:58.514 | 1.00th=[ 4359], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 8291], 00:25:58.514 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:25:58.514 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13829], 95.00th=[46924], 00:25:58.514 | 99.00th=[52691], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:25:58.514 | 99.99th=[54264] 00:25:58.514 bw ( KiB/s): min=26880, max=33280, per=33.08%, avg=30321.78, stdev=2614.18, samples=9 00:25:58.514 iops : min= 210, max= 260, avg=236.89, stdev=20.42, samples=9 00:25:58.514 lat (msec) : 4=0.09%, 10=27.23%, 20=67.53%, 50=2.41%, 100=2.75% 00:25:58.514 cpu : usr=93.32%, sys=5.12%, ctx=46, majf=0, minf=9 00:25:58.514 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.514 issued rwts: total=1164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.514 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:58.514 filename0: (groupid=0, jobs=1): err= 0: pid=101926: Sun Jul 14 18:42:05 2024 00:25:58.514 read: IOPS=232, BW=29.0MiB/s (30.4MB/s)(146MiB/5046msec) 00:25:58.514 slat (usec): min=6, max=106, avg=15.24, stdev= 9.02 00:25:58.514 clat (usec): min=4372, max=53335, avg=12834.50, stdev=10187.50 00:25:58.514 lat (usec): min=4383, max=53343, avg=12849.73, stdev=10188.74 00:25:58.514 clat percentiles (usec): 00:25:58.514 | 1.00th=[ 4817], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9110], 00:25:58.514 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:25:58.514 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12256], 95.00th=[48497], 00:25:58.514 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:25:58.514 | 99.99th=[53216] 00:25:58.514 bw ( KiB/s): min=19968, max=38656, per=32.68%, avg=29952.00, stdev=6986.92, samples=10 00:25:58.514 iops : min= 156, max= 302, avg=234.00, stdev=54.59, samples=10 00:25:58.514 lat (msec) : 10=35.95%, 20=57.30%, 50=2.82%, 100=3.93% 00:25:58.514 cpu : usr=93.02%, sys=5.15%, ctx=9, majf=0, minf=0 00:25:58.514 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.514 issued rwts: total=1171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.514 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:58.514 filename0: (groupid=0, jobs=1): err= 0: pid=101927: Sun Jul 14 18:42:05 2024 00:25:58.514 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5002msec) 00:25:58.514 slat (nsec): min=6421, max=55118, avg=11023.71, stdev=6151.36 00:25:58.514 clat (usec): min=3119, max=52108, avg=11714.83, stdev=5246.55 00:25:58.514 lat (usec): min=3129, max=52116, avg=11725.85, stdev=5246.87 00:25:58.514 clat percentiles (usec): 
00:25:58.514 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 8356], 00:25:58.514 | 30.00th=[ 9110], 40.00th=[10421], 50.00th=[13173], 60.00th=[13960], 00:25:58.514 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15664], 95.00th=[16057], 00:25:58.514 | 99.00th=[17171], 99.50th=[45876], 99.90th=[51643], 99.95th=[52167], 00:25:58.514 | 99.99th=[52167] 00:25:58.514 bw ( KiB/s): min=24576, max=42240, per=36.23%, avg=33201.11, stdev=6395.48, samples=9 00:25:58.514 iops : min= 192, max= 330, avg=259.33, stdev=50.00, samples=9 00:25:58.514 lat (msec) : 4=1.80%, 10=37.40%, 20=59.86%, 50=0.70%, 100=0.23% 00:25:58.514 cpu : usr=93.80%, sys=4.66%, ctx=4, majf=0, minf=9 00:25:58.514 IO depths : 1=30.8%, 2=69.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.514 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.514 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:58.514 00:25:58.514 Run status group 0 (all jobs): 00:25:58.514 READ: bw=89.5MiB/s (93.8MB/s), 29.0MiB/s-31.9MiB/s (30.4MB/s-33.5MB/s), io=452MiB (474MB), run=5002-5046msec 00:25:58.773 18:42:05 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:58.773 18:42:05 -- target/dif.sh@43 -- # local sub 00:25:58.773 18:42:05 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.773 18:42:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:58.773 18:42:05 -- target/dif.sh@36 -- # local sub_id=0 00:25:58.773 18:42:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:05 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:58.773 18:42:05 -- target/dif.sh@109 -- # bs=4k 00:25:58.773 18:42:05 -- target/dif.sh@109 -- # numjobs=8 00:25:58.773 18:42:05 -- target/dif.sh@109 -- # iodepth=16 00:25:58.773 18:42:05 -- target/dif.sh@109 -- # runtime= 00:25:58.773 18:42:05 -- target/dif.sh@109 -- # files=2 00:25:58.773 18:42:05 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:58.773 18:42:05 -- target/dif.sh@28 -- # local sub 00:25:58.773 18:42:05 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.773 18:42:05 -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.773 18:42:05 -- target/dif.sh@18 -- # local sub_id=0 00:25:58.773 18:42:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 bdev_null0 00:25:58.773 18:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:05 -- common/autotest_common.sh@579 -- 
# [[ 0 == 0 ]] 00:25:58.773 18:42:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 [2024-07-14 18:42:05.992833] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.773 18:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:05 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.773 18:42:05 -- target/dif.sh@31 -- # create_subsystem 1 00:25:58.773 18:42:05 -- target/dif.sh@18 -- # local sub_id=1 00:25:58.773 18:42:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:58.773 18:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 bdev_null1 00:25:58.773 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:58.773 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:58.773 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.773 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:06 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.773 18:42:06 -- target/dif.sh@31 -- # create_subsystem 2 00:25:58.773 18:42:06 -- target/dif.sh@18 -- # local sub_id=2 00:25:58.773 18:42:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:58.773 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:25:58.773 bdev_null2 00:25:58.773 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.773 18:42:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:58.773 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.773 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:25:58.774 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.774 18:42:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:58.774 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.774 18:42:06 -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.774 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.774 18:42:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:58.774 18:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.774 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:25:58.774 18:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.774 18:42:06 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:58.774 18:42:06 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:58.774 18:42:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.774 18:42:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:58.774 18:42:06 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.774 18:42:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:58.774 18:42:06 -- nvmf/common.sh@520 -- # config=() 00:25:58.774 18:42:06 -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.774 18:42:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.774 18:42:06 -- nvmf/common.sh@520 -- # local subsystem config 00:25:58.774 18:42:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:58.774 18:42:06 -- target/dif.sh@54 -- # local file 00:25:58.774 18:42:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.774 18:42:06 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.774 18:42:06 -- target/dif.sh@56 -- # cat 00:25:58.774 18:42:06 -- common/autotest_common.sh@1320 -- # shift 00:25:58.774 18:42:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.774 { 00:25:58.774 "params": { 00:25:58.774 "name": "Nvme$subsystem", 00:25:58.774 "trtype": "$TEST_TRANSPORT", 00:25:58.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.774 "adrfam": "ipv4", 00:25:58.774 "trsvcid": "$NVMF_PORT", 00:25:58.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.774 "hdgst": ${hdgst:-false}, 00:25:58.774 "ddgst": ${ddgst:-false} 00:25:58.774 }, 00:25:58.774 "method": "bdev_nvme_attach_controller" 00:25:58.774 } 00:25:58.774 EOF 00:25:58.774 )") 00:25:58.774 18:42:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:58.774 18:42:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.774 18:42:06 -- nvmf/common.sh@542 -- # cat 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:58.774 18:42:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.774 18:42:06 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.774 18:42:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.774 18:42:06 -- target/dif.sh@73 -- # cat 00:25:58.774 18:42:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.774 { 00:25:58.774 "params": { 00:25:58.774 "name": "Nvme$subsystem", 00:25:58.774 "trtype": "$TEST_TRANSPORT", 00:25:58.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.774 "adrfam": "ipv4", 00:25:58.774 "trsvcid": "$NVMF_PORT", 00:25:58.774 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.774 "hdgst": ${hdgst:-false}, 00:25:58.774 "ddgst": ${ddgst:-false} 00:25:58.774 }, 00:25:58.774 "method": "bdev_nvme_attach_controller" 00:25:58.774 } 00:25:58.774 EOF 00:25:58.774 )") 00:25:58.774 18:42:06 -- nvmf/common.sh@542 -- # cat 00:25:58.774 18:42:06 -- target/dif.sh@72 -- # (( file++ )) 00:25:58.774 18:42:06 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.774 18:42:06 -- target/dif.sh@73 -- # cat 00:25:58.774 18:42:06 -- target/dif.sh@72 -- # (( file++ )) 00:25:58.774 18:42:06 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.774 18:42:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.774 18:42:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.774 { 00:25:58.774 "params": { 00:25:58.774 "name": "Nvme$subsystem", 00:25:58.774 "trtype": "$TEST_TRANSPORT", 00:25:58.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.774 "adrfam": "ipv4", 00:25:58.774 "trsvcid": "$NVMF_PORT", 00:25:58.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.774 "hdgst": ${hdgst:-false}, 00:25:58.774 "ddgst": ${ddgst:-false} 00:25:58.774 }, 00:25:58.774 "method": "bdev_nvme_attach_controller" 00:25:58.774 } 00:25:58.774 EOF 00:25:58.774 )") 00:25:58.774 18:42:06 -- nvmf/common.sh@542 -- # cat 00:25:58.774 18:42:06 -- nvmf/common.sh@544 -- # jq . 00:25:58.774 18:42:06 -- nvmf/common.sh@545 -- # IFS=, 00:25:58.774 18:42:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:58.774 "params": { 00:25:58.774 "name": "Nvme0", 00:25:58.774 "trtype": "tcp", 00:25:58.774 "traddr": "10.0.0.2", 00:25:58.774 "adrfam": "ipv4", 00:25:58.774 "trsvcid": "4420", 00:25:58.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.774 "hdgst": false, 00:25:58.774 "ddgst": false 00:25:58.774 }, 00:25:58.774 "method": "bdev_nvme_attach_controller" 00:25:58.774 },{ 00:25:58.774 "params": { 00:25:58.774 "name": "Nvme1", 00:25:58.774 "trtype": "tcp", 00:25:58.774 "traddr": "10.0.0.2", 00:25:58.774 "adrfam": "ipv4", 00:25:58.774 "trsvcid": "4420", 00:25:58.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.774 "hdgst": false, 00:25:58.774 "ddgst": false 00:25:58.774 }, 00:25:58.774 "method": "bdev_nvme_attach_controller" 00:25:58.774 },{ 00:25:58.774 "params": { 00:25:58.774 "name": "Nvme2", 00:25:58.774 "trtype": "tcp", 00:25:58.774 "traddr": "10.0.0.2", 00:25:58.774 "adrfam": "ipv4", 00:25:58.774 "trsvcid": "4420", 00:25:58.774 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:58.774 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:58.774 "hdgst": false, 00:25:58.774 "ddgst": false 00:25:58.774 }, 00:25:58.774 "method": "bdev_nvme_attach_controller" 00:25:58.774 }' 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:58.774 18:42:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:58.774 18:42:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:58.774 18:42:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:58.774 18:42:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:58.774 
18:42:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:58.774 18:42:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.033 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:59.033 ... 00:25:59.033 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:59.033 ... 00:25:59.033 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:59.033 ... 00:25:59.033 fio-3.35 00:25:59.033 Starting 24 threads 00:25:59.599 [2024-07-14 18:42:06.903407] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:59.599 [2024-07-14 18:42:06.903484] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:11.829 00:26:11.829 filename0: (groupid=0, jobs=1): err= 0: pid=102023: Sun Jul 14 18:42:17 2024 00:26:11.829 read: IOPS=198, BW=793KiB/s (812kB/s)(7960KiB/10044msec) 00:26:11.829 slat (usec): min=4, max=8037, avg=21.90, stdev=254.30 00:26:11.829 clat (msec): min=36, max=152, avg=80.48, stdev=20.33 00:26:11.829 lat (msec): min=36, max=152, avg=80.50, stdev=20.33 00:26:11.829 clat percentiles (msec): 00:26:11.829 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 64], 00:26:11.829 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:26:11.829 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:26:11.829 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 153], 00:26:11.829 | 99.99th=[ 153] 00:26:11.829 bw ( KiB/s): min= 640, max= 928, per=3.59%, avg=789.60, stdev=64.48, samples=20 00:26:11.829 iops : min= 160, max= 232, avg=197.40, stdev=16.12, samples=20 00:26:11.829 lat (msec) : 50=6.53%, 100=77.94%, 250=15.53% 00:26:11.829 cpu : usr=33.98%, sys=0.60%, ctx=923, majf=0, minf=9 00:26:11.829 IO depths : 1=3.2%, 2=6.9%, 4=18.2%, 8=62.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:11.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.829 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.829 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.829 filename0: (groupid=0, jobs=1): err= 0: pid=102024: Sun Jul 14 18:42:17 2024 00:26:11.829 read: IOPS=209, BW=836KiB/s (856kB/s)(8384KiB/10025msec) 00:26:11.829 slat (usec): min=4, max=1031, avg=13.84, stdev=24.35 00:26:11.829 clat (msec): min=37, max=177, avg=76.43, stdev=19.36 00:26:11.829 lat (msec): min=37, max=177, avg=76.44, stdev=19.36 00:26:11.829 clat percentiles (msec): 00:26:11.829 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:26:11.829 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:26:11.829 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 102], 95.00th=[ 108], 00:26:11.829 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 178], 99.95th=[ 178], 00:26:11.829 | 99.99th=[ 178] 00:26:11.830 bw ( KiB/s): min= 656, max= 1024, per=3.76%, avg=826.11, stdev=103.48, samples=19 00:26:11.830 iops : min= 164, max= 256, avg=206.53, stdev=25.87, samples=19 00:26:11.830 lat (msec) : 50=7.82%, 100=81.97%, 250=10.21% 00:26:11.830 cpu : usr=43.02%, sys=0.57%, ctx=1378, majf=0, minf=9 00:26:11.830 IO depths : 1=2.3%, 2=5.0%, 4=14.6%, 8=67.3%, 
16=10.7%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename0: (groupid=0, jobs=1): err= 0: pid=102025: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=203, BW=815KiB/s (834kB/s)(8160KiB/10018msec) 00:26:11.830 slat (usec): min=4, max=8023, avg=25.43, stdev=275.80 00:26:11.830 clat (msec): min=23, max=155, avg=78.39, stdev=19.78 00:26:11.830 lat (msec): min=23, max=155, avg=78.42, stdev=19.78 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 30], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 64], 00:26:11.830 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:26:11.830 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 112], 00:26:11.830 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:26:11.830 | 99.99th=[ 157] 00:26:11.830 bw ( KiB/s): min= 640, max= 1024, per=3.67%, avg=805.89, stdev=99.41, samples=19 00:26:11.830 iops : min= 160, max= 256, avg=201.47, stdev=24.85, samples=19 00:26:11.830 lat (msec) : 50=5.54%, 100=82.35%, 250=12.11% 00:26:11.830 cpu : usr=42.81%, sys=0.55%, ctx=1529, majf=0, minf=9 00:26:11.830 IO depths : 1=3.4%, 2=7.6%, 4=18.9%, 8=60.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename0: (groupid=0, jobs=1): err= 0: pid=102026: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=229, BW=920KiB/s (942kB/s)(9256KiB/10065msec) 00:26:11.830 slat (usec): min=4, max=8029, avg=21.01, stdev=217.35 00:26:11.830 clat (msec): min=23, max=155, avg=69.34, stdev=21.78 00:26:11.830 lat (msec): min=23, max=155, avg=69.36, stdev=21.79 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 50], 00:26:11.830 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:11.830 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 108], 00:26:11.830 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:11.830 | 99.99th=[ 157] 00:26:11.830 bw ( KiB/s): min= 640, max= 1152, per=4.19%, avg=919.05, stdev=131.12, samples=20 00:26:11.830 iops : min= 160, max= 288, avg=229.75, stdev=32.77, samples=20 00:26:11.830 lat (msec) : 50=20.57%, 100=71.22%, 250=8.21% 00:26:11.830 cpu : usr=40.13%, sys=0.61%, ctx=1112, majf=0, minf=9 00:26:11.830 IO depths : 1=0.8%, 2=1.7%, 4=8.8%, 8=75.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename0: (groupid=0, jobs=1): err= 0: pid=102027: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=201, BW=807KiB/s (827kB/s)(8084KiB/10012msec) 00:26:11.830 slat (nsec): min=5044, max=77770, avg=13607.07, stdev=8060.00 00:26:11.830 clat (msec): min=21, max=184, avg=79.09, stdev=22.82 00:26:11.830 lat (msec): min=21, max=184, 
avg=79.10, stdev=22.82 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:26:11.830 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:26:11.830 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:26:11.830 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 184], 99.95th=[ 184], 00:26:11.830 | 99.99th=[ 184] 00:26:11.830 bw ( KiB/s): min= 672, max= 1000, per=3.64%, avg=799.16, stdev=83.94, samples=19 00:26:11.830 iops : min= 168, max= 250, avg=199.79, stdev=20.99, samples=19 00:26:11.830 lat (msec) : 50=11.13%, 100=74.22%, 250=14.65% 00:26:11.830 cpu : usr=32.61%, sys=0.56%, ctx=914, majf=0, minf=9 00:26:11.830 IO depths : 1=2.5%, 2=5.4%, 4=15.3%, 8=66.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename0: (groupid=0, jobs=1): err= 0: pid=102028: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=229, BW=919KiB/s (941kB/s)(9204KiB/10016msec) 00:26:11.830 slat (usec): min=3, max=8024, avg=26.67, stdev=301.94 00:26:11.830 clat (msec): min=15, max=161, avg=69.50, stdev=20.42 00:26:11.830 lat (msec): min=15, max=161, avg=69.53, stdev=20.42 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:26:11.830 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:26:11.830 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:26:11.830 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 163], 99.95th=[ 163], 00:26:11.830 | 99.99th=[ 163] 00:26:11.830 bw ( KiB/s): min= 728, max= 1128, per=4.14%, avg=909.95, stdev=121.08, samples=19 00:26:11.830 iops : min= 182, max= 282, avg=227.47, stdev=30.27, samples=19 00:26:11.830 lat (msec) : 20=0.26%, 50=20.47%, 100=71.93%, 250=7.34% 00:26:11.830 cpu : usr=38.43%, sys=0.57%, ctx=1011, majf=0, minf=9 00:26:11.830 IO depths : 1=1.8%, 2=3.9%, 4=11.3%, 8=71.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename0: (groupid=0, jobs=1): err= 0: pid=102029: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=249, BW=998KiB/s (1022kB/s)(9.79MiB/10048msec) 00:26:11.830 slat (usec): min=5, max=4034, avg=15.24, stdev=113.17 00:26:11.830 clat (msec): min=12, max=173, avg=63.94, stdev=22.17 00:26:11.830 lat (msec): min=12, max=173, avg=63.96, stdev=22.17 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 47], 00:26:11.830 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 67], 00:26:11.830 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 92], 95.00th=[ 105], 00:26:11.830 | 99.00th=[ 134], 99.50th=[ 146], 99.90th=[ 174], 99.95th=[ 174], 00:26:11.830 | 99.99th=[ 174] 00:26:11.830 bw ( KiB/s): min= 744, max= 1640, per=4.55%, avg=998.80, stdev=198.15, samples=20 00:26:11.830 iops : min= 186, max= 410, avg=249.70, stdev=49.54, samples=20 00:26:11.830 lat (msec) : 20=1.28%, 50=29.00%, 100=63.94%, 250=5.78% 00:26:11.830 cpu : usr=43.70%, sys=0.80%, 
ctx=1255, majf=0, minf=9 00:26:11.830 IO depths : 1=0.8%, 2=1.8%, 4=8.7%, 8=75.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename0: (groupid=0, jobs=1): err= 0: pid=102030: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=203, BW=813KiB/s (832kB/s)(8140KiB/10017msec) 00:26:11.830 slat (usec): min=3, max=7042, avg=19.27, stdev=190.57 00:26:11.830 clat (msec): min=32, max=175, avg=78.61, stdev=22.78 00:26:11.830 lat (msec): min=32, max=175, avg=78.63, stdev=22.79 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 61], 00:26:11.830 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:26:11.830 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 114], 95.00th=[ 123], 00:26:11.830 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 176], 00:26:11.830 | 99.99th=[ 176] 00:26:11.830 bw ( KiB/s): min= 640, max= 944, per=3.67%, avg=805.47, stdev=86.49, samples=19 00:26:11.830 iops : min= 160, max= 236, avg=201.37, stdev=21.62, samples=19 00:26:11.830 lat (msec) : 50=8.94%, 100=75.18%, 250=15.87% 00:26:11.830 cpu : usr=33.35%, sys=0.54%, ctx=922, majf=0, minf=9 00:26:11.830 IO depths : 1=1.8%, 2=3.9%, 4=13.4%, 8=69.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename1: (groupid=0, jobs=1): err= 0: pid=102031: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=227, BW=909KiB/s (931kB/s)(9148KiB/10059msec) 00:26:11.830 slat (usec): min=3, max=8032, avg=22.84, stdev=290.17 00:26:11.830 clat (msec): min=24, max=181, avg=70.23, stdev=21.08 00:26:11.830 lat (msec): min=24, max=181, avg=70.25, stdev=21.09 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 51], 00:26:11.830 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:26:11.830 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:26:11.830 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 182], 99.95th=[ 182], 00:26:11.830 | 99.99th=[ 182] 00:26:11.830 bw ( KiB/s): min= 688, max= 1088, per=4.14%, avg=908.30, stdev=103.30, samples=20 00:26:11.830 iops : min= 172, max= 272, avg=227.05, stdev=25.83, samples=20 00:26:11.830 lat (msec) : 50=18.85%, 100=73.37%, 250=7.78% 00:26:11.830 cpu : usr=32.42%, sys=0.51%, ctx=900, majf=0, minf=9 00:26:11.830 IO depths : 1=0.7%, 2=1.6%, 4=8.1%, 8=76.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:11.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.830 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.830 filename1: (groupid=0, jobs=1): err= 0: pid=102032: Sun Jul 14 18:42:17 2024 00:26:11.830 read: IOPS=246, BW=987KiB/s (1011kB/s)(9928KiB/10054msec) 00:26:11.830 slat (usec): min=4, max=8100, avg=26.90, stdev=310.47 00:26:11.830 clat (msec): min=7, max=129, 
avg=64.49, stdev=20.49 00:26:11.830 lat (msec): min=7, max=129, avg=64.52, stdev=20.49 00:26:11.830 clat percentiles (msec): 00:26:11.830 | 1.00th=[ 20], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:26:11.830 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:26:11.830 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 99], 00:26:11.830 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 130], 99.95th=[ 130], 00:26:11.830 | 99.99th=[ 130] 00:26:11.830 bw ( KiB/s): min= 766, max= 1384, per=4.50%, avg=988.70, stdev=168.30, samples=20 00:26:11.830 iops : min= 191, max= 346, avg=247.15, stdev=42.11, samples=20 00:26:11.830 lat (msec) : 10=0.28%, 20=0.89%, 50=25.79%, 100=68.13%, 250=4.92% 00:26:11.831 cpu : usr=39.64%, sys=0.79%, ctx=1247, majf=0, minf=9 00:26:11.831 IO depths : 1=1.5%, 2=3.3%, 4=11.3%, 8=72.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename1: (groupid=0, jobs=1): err= 0: pid=102033: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=223, BW=892KiB/s (914kB/s)(8948KiB/10029msec) 00:26:11.831 slat (usec): min=4, max=8052, avg=22.27, stdev=225.11 00:26:11.831 clat (msec): min=25, max=151, avg=71.53, stdev=19.00 00:26:11.831 lat (msec): min=25, max=151, avg=71.55, stdev=19.00 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:26:11.831 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:26:11.831 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:26:11.831 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 153], 00:26:11.831 | 99.99th=[ 153] 00:26:11.831 bw ( KiB/s): min= 728, max= 1072, per=4.05%, avg=888.40, stdev=111.14, samples=20 00:26:11.831 iops : min= 182, max= 268, avg=222.10, stdev=27.78, samples=20 00:26:11.831 lat (msec) : 50=14.22%, 100=78.90%, 250=6.88% 00:26:11.831 cpu : usr=34.94%, sys=0.67%, ctx=950, majf=0, minf=9 00:26:11.831 IO depths : 1=1.5%, 2=3.4%, 4=11.5%, 8=71.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename1: (groupid=0, jobs=1): err= 0: pid=102034: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=238, BW=953KiB/s (976kB/s)(9592KiB/10065msec) 00:26:11.831 slat (usec): min=3, max=8084, avg=32.60, stdev=349.29 00:26:11.831 clat (msec): min=8, max=133, avg=66.81, stdev=21.01 00:26:11.831 lat (msec): min=11, max=133, avg=66.84, stdev=21.03 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:26:11.831 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:11.831 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 104], 00:26:11.831 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 134], 99.95th=[ 134], 00:26:11.831 | 99.99th=[ 134] 00:26:11.831 bw ( KiB/s): min= 728, max= 1264, per=4.34%, avg=952.70, stdev=157.45, samples=20 00:26:11.831 iops : min= 182, max= 316, avg=238.15, stdev=39.37, samples=20 00:26:11.831 lat (msec) : 10=0.04%, 20=1.75%, 
50=21.73%, 100=70.23%, 250=6.26% 00:26:11.831 cpu : usr=37.53%, sys=0.54%, ctx=1059, majf=0, minf=9 00:26:11.831 IO depths : 1=0.8%, 2=2.0%, 4=9.3%, 8=75.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename1: (groupid=0, jobs=1): err= 0: pid=102035: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=239, BW=960KiB/s (983kB/s)(9640KiB/10045msec) 00:26:11.831 slat (usec): min=4, max=8103, avg=23.37, stdev=269.43 00:26:11.831 clat (msec): min=20, max=143, avg=66.46, stdev=19.06 00:26:11.831 lat (msec): min=20, max=143, avg=66.48, stdev=19.07 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 25], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 51], 00:26:11.831 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 70], 00:26:11.831 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 104], 00:26:11.831 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:26:11.831 | 99.99th=[ 144] 00:26:11.831 bw ( KiB/s): min= 768, max= 1282, per=4.36%, avg=957.55, stdev=141.72, samples=20 00:26:11.831 iops : min= 192, max= 320, avg=239.35, stdev=35.36, samples=20 00:26:11.831 lat (msec) : 50=19.00%, 100=74.98%, 250=6.02% 00:26:11.831 cpu : usr=42.97%, sys=0.87%, ctx=1232, majf=0, minf=9 00:26:11.831 IO depths : 1=1.5%, 2=3.6%, 4=12.6%, 8=70.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename1: (groupid=0, jobs=1): err= 0: pid=102036: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=207, BW=830KiB/s (850kB/s)(8324KiB/10026msec) 00:26:11.831 slat (usec): min=5, max=8070, avg=30.00, stdev=351.84 00:26:11.831 clat (msec): min=31, max=152, avg=76.87, stdev=20.03 00:26:11.831 lat (msec): min=31, max=152, avg=76.90, stdev=20.03 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:26:11.831 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:26:11.831 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:26:11.831 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 153], 99.95th=[ 153], 00:26:11.831 | 99.99th=[ 153] 00:26:11.831 bw ( KiB/s): min= 656, max= 944, per=3.75%, avg=824.42, stdev=80.24, samples=19 00:26:11.831 iops : min= 164, max= 236, avg=206.11, stdev=20.06, samples=19 00:26:11.831 lat (msec) : 50=8.46%, 100=74.77%, 250=16.77% 00:26:11.831 cpu : usr=34.42%, sys=0.47%, ctx=909, majf=0, minf=9 00:26:11.831 IO depths : 1=1.9%, 2=4.8%, 4=15.0%, 8=67.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename1: (groupid=0, jobs=1): err= 0: pid=102037: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=217, BW=872KiB/s (893kB/s)(8728KiB/10010msec) 00:26:11.831 slat (usec): min=7, 
max=7183, avg=25.72, stdev=240.24 00:26:11.831 clat (msec): min=32, max=151, avg=73.19, stdev=18.96 00:26:11.831 lat (msec): min=32, max=151, avg=73.21, stdev=18.97 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 60], 00:26:11.831 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 75], 00:26:11.831 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 106], 00:26:11.831 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:26:11.831 | 99.99th=[ 153] 00:26:11.831 bw ( KiB/s): min= 728, max= 1072, per=3.93%, avg=864.00, stdev=96.77, samples=19 00:26:11.831 iops : min= 182, max= 268, avg=216.00, stdev=24.19, samples=19 00:26:11.831 lat (msec) : 50=11.73%, 100=79.97%, 250=8.30% 00:26:11.831 cpu : usr=41.66%, sys=0.73%, ctx=1149, majf=0, minf=9 00:26:11.831 IO depths : 1=2.0%, 2=4.3%, 4=12.6%, 8=69.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename1: (groupid=0, jobs=1): err= 0: pid=102038: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=254, BW=1017KiB/s (1041kB/s)(9.97MiB/10041msec) 00:26:11.831 slat (usec): min=3, max=7969, avg=23.69, stdev=293.81 00:26:11.831 clat (msec): min=25, max=136, avg=62.70, stdev=20.52 00:26:11.831 lat (msec): min=25, max=136, avg=62.72, stdev=20.52 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 45], 00:26:11.831 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 65], 00:26:11.831 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 101], 00:26:11.831 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138], 00:26:11.831 | 99.99th=[ 138] 00:26:11.831 bw ( KiB/s): min= 736, max= 1256, per=4.64%, avg=1018.40, stdev=123.35, samples=20 00:26:11.831 iops : min= 184, max= 314, avg=254.55, stdev=30.84, samples=20 00:26:11.831 lat (msec) : 50=33.78%, 100=61.40%, 250=4.82% 00:26:11.831 cpu : usr=39.37%, sys=0.45%, ctx=1336, majf=0, minf=9 00:26:11.831 IO depths : 1=0.2%, 2=0.5%, 4=4.9%, 8=80.3%, 16=14.1%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=89.0%, 8=7.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename2: (groupid=0, jobs=1): err= 0: pid=102039: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=224, BW=897KiB/s (918kB/s)(9016KiB/10056msec) 00:26:11.831 slat (usec): min=3, max=8025, avg=22.80, stdev=280.95 00:26:11.831 clat (msec): min=13, max=170, avg=71.04, stdev=21.52 00:26:11.831 lat (msec): min=13, max=170, avg=71.06, stdev=21.52 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 19], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 57], 00:26:11.831 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:26:11.831 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 109], 00:26:11.831 | 99.00th=[ 125], 99.50th=[ 150], 99.90th=[ 171], 99.95th=[ 171], 00:26:11.831 | 99.99th=[ 171] 00:26:11.831 bw ( KiB/s): min= 640, max= 1082, per=4.08%, avg=895.20, stdev=110.41, samples=20 00:26:11.831 iops : min= 160, max= 270, avg=223.75, 
stdev=27.54, samples=20 00:26:11.831 lat (msec) : 20=1.02%, 50=16.68%, 100=73.56%, 250=8.74% 00:26:11.831 cpu : usr=32.90%, sys=0.37%, ctx=906, majf=0, minf=9 00:26:11.831 IO depths : 1=0.6%, 2=1.2%, 4=9.0%, 8=76.2%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:11.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.831 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.831 filename2: (groupid=0, jobs=1): err= 0: pid=102040: Sun Jul 14 18:42:17 2024 00:26:11.831 read: IOPS=272, BW=1090KiB/s (1116kB/s)(10.7MiB/10059msec) 00:26:11.831 slat (usec): min=4, max=8041, avg=17.39, stdev=216.80 00:26:11.831 clat (msec): min=3, max=130, avg=58.48, stdev=19.81 00:26:11.831 lat (msec): min=3, max=130, avg=58.50, stdev=19.81 00:26:11.831 clat percentiles (msec): 00:26:11.831 | 1.00th=[ 5], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 45], 00:26:11.831 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 61], 00:26:11.831 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 94], 00:26:11.831 | 99.00th=[ 110], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 131], 00:26:11.831 | 99.99th=[ 131] 00:26:11.831 bw ( KiB/s): min= 816, max= 1664, per=4.96%, avg=1088.75, stdev=181.95, samples=20 00:26:11.831 iops : min= 204, max= 416, avg=272.15, stdev=45.50, samples=20 00:26:11.831 lat (msec) : 4=0.58%, 10=1.17%, 20=0.58%, 50=36.31%, 100=58.36% 00:26:11.831 lat (msec) : 250=2.99% 00:26:11.831 cpu : usr=38.52%, sys=0.71%, ctx=1114, majf=0, minf=0 00:26:11.831 IO depths : 1=0.2%, 2=0.5%, 4=6.8%, 8=78.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=89.1%, 8=6.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.832 filename2: (groupid=0, jobs=1): err= 0: pid=102041: Sun Jul 14 18:42:17 2024 00:26:11.832 read: IOPS=217, BW=869KiB/s (890kB/s)(8708KiB/10017msec) 00:26:11.832 slat (usec): min=4, max=4014, avg=17.52, stdev=89.06 00:26:11.832 clat (msec): min=32, max=142, avg=73.49, stdev=19.52 00:26:11.832 lat (msec): min=32, max=142, avg=73.50, stdev=19.52 00:26:11.832 clat percentiles (msec): 00:26:11.832 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:26:11.832 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 75], 00:26:11.832 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 111], 00:26:11.832 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:26:11.832 | 99.99th=[ 142] 00:26:11.832 bw ( KiB/s): min= 656, max= 1104, per=3.95%, avg=868.21, stdev=109.70, samples=19 00:26:11.832 iops : min= 164, max= 276, avg=217.05, stdev=27.43, samples=19 00:26:11.832 lat (msec) : 50=12.63%, 100=77.54%, 250=9.83% 00:26:11.832 cpu : usr=34.12%, sys=0.54%, ctx=991, majf=0, minf=9 00:26:11.832 IO depths : 1=1.3%, 2=3.5%, 4=11.5%, 8=71.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=90.8%, 8=4.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.832 filename2: (groupid=0, jobs=1): err= 0: pid=102042: Sun Jul 14 
18:42:17 2024 00:26:11.832 read: IOPS=207, BW=830KiB/s (850kB/s)(8328KiB/10029msec) 00:26:11.832 slat (usec): min=3, max=8034, avg=25.38, stdev=277.84 00:26:11.832 clat (msec): min=23, max=165, avg=76.93, stdev=23.65 00:26:11.832 lat (msec): min=23, max=165, avg=76.96, stdev=23.65 00:26:11.832 clat percentiles (msec): 00:26:11.832 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 60], 00:26:11.832 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:26:11.832 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 125], 00:26:11.832 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 167], 00:26:11.832 | 99.99th=[ 167] 00:26:11.832 bw ( KiB/s): min= 600, max= 1232, per=3.76%, avg=826.40, stdev=139.12, samples=20 00:26:11.832 iops : min= 150, max= 308, avg=206.60, stdev=34.78, samples=20 00:26:11.832 lat (msec) : 50=12.92%, 100=75.60%, 250=11.48% 00:26:11.832 cpu : usr=37.48%, sys=0.62%, ctx=1106, majf=0, minf=9 00:26:11.832 IO depths : 1=2.4%, 2=5.0%, 4=14.3%, 8=67.4%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.832 filename2: (groupid=0, jobs=1): err= 0: pid=102043: Sun Jul 14 18:42:17 2024 00:26:11.832 read: IOPS=227, BW=911KiB/s (932kB/s)(9136KiB/10034msec) 00:26:11.832 slat (usec): min=3, max=8046, avg=23.22, stdev=290.44 00:26:11.832 clat (msec): min=22, max=135, avg=70.05, stdev=19.66 00:26:11.832 lat (msec): min=22, max=135, avg=70.07, stdev=19.67 00:26:11.832 clat percentiles (msec): 00:26:11.832 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:26:11.832 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:26:11.832 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:26:11.832 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 136], 00:26:11.832 | 99.99th=[ 136] 00:26:11.832 bw ( KiB/s): min= 640, max= 1120, per=4.13%, avg=907.20, stdev=116.06, samples=20 00:26:11.832 iops : min= 160, max= 280, avg=226.80, stdev=29.01, samples=20 00:26:11.832 lat (msec) : 50=14.67%, 100=77.63%, 250=7.71% 00:26:11.832 cpu : usr=34.93%, sys=0.47%, ctx=909, majf=0, minf=9 00:26:11.832 IO depths : 1=1.1%, 2=2.6%, 4=9.9%, 8=73.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.832 filename2: (groupid=0, jobs=1): err= 0: pid=102044: Sun Jul 14 18:42:17 2024 00:26:11.832 read: IOPS=254, BW=1016KiB/s (1041kB/s)(9.97MiB/10048msec) 00:26:11.832 slat (usec): min=4, max=8020, avg=16.55, stdev=187.32 00:26:11.832 clat (msec): min=12, max=155, avg=62.73, stdev=22.98 00:26:11.832 lat (msec): min=12, max=155, avg=62.75, stdev=22.97 00:26:11.832 clat percentiles (msec): 00:26:11.832 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 45], 00:26:11.832 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 66], 00:26:11.832 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 108], 00:26:11.832 | 99.00th=[ 127], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:26:11.832 | 99.99th=[ 157] 00:26:11.832 bw ( KiB/s): min= 640, 
max= 1352, per=4.64%, avg=1018.80, stdev=206.40, samples=20 00:26:11.832 iops : min= 160, max= 338, avg=254.70, stdev=51.60, samples=20 00:26:11.832 lat (msec) : 20=2.04%, 50=33.53%, 100=55.70%, 250=8.73% 00:26:11.832 cpu : usr=42.27%, sys=0.70%, ctx=1379, majf=0, minf=9 00:26:11.832 IO depths : 1=0.4%, 2=0.8%, 4=5.8%, 8=79.2%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=89.1%, 8=7.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.832 filename2: (groupid=0, jobs=1): err= 0: pid=102045: Sun Jul 14 18:42:17 2024 00:26:11.832 read: IOPS=273, BW=1095KiB/s (1122kB/s)(10.8MiB/10068msec) 00:26:11.832 slat (usec): min=3, max=3738, avg=14.44, stdev=92.24 00:26:11.832 clat (msec): min=4, max=131, avg=58.25, stdev=17.86 00:26:11.832 lat (msec): min=5, max=131, avg=58.26, stdev=17.86 00:26:11.832 clat percentiles (msec): 00:26:11.832 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 45], 00:26:11.832 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 61], 00:26:11.832 | 70.00th=[ 66], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 92], 00:26:11.832 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 132], 99.95th=[ 132], 00:26:11.832 | 99.99th=[ 132] 00:26:11.832 bw ( KiB/s): min= 880, max= 1523, per=4.98%, avg=1094.95, stdev=146.88, samples=20 00:26:11.832 iops : min= 220, max= 380, avg=273.70, stdev=36.60, samples=20 00:26:11.832 lat (msec) : 10=1.16%, 20=0.58%, 50=35.80%, 100=60.75%, 250=1.70% 00:26:11.832 cpu : usr=47.40%, sys=0.80%, ctx=1268, majf=0, minf=0 00:26:11.832 IO depths : 1=1.1%, 2=2.5%, 4=10.0%, 8=74.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:11.832 filename2: (groupid=0, jobs=1): err= 0: pid=102046: Sun Jul 14 18:42:17 2024 00:26:11.832 read: IOPS=246, BW=984KiB/s (1008kB/s)(9912KiB/10069msec) 00:26:11.832 slat (usec): min=3, max=8029, avg=17.41, stdev=180.15 00:26:11.832 clat (msec): min=4, max=143, avg=64.92, stdev=22.71 00:26:11.832 lat (msec): min=4, max=143, avg=64.94, stdev=22.71 00:26:11.832 clat percentiles (msec): 00:26:11.832 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:26:11.832 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:26:11.832 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 106], 00:26:11.832 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:26:11.832 | 99.99th=[ 144] 00:26:11.832 bw ( KiB/s): min= 696, max= 1368, per=4.48%, avg=984.20, stdev=183.52, samples=20 00:26:11.832 iops : min= 174, max= 342, avg=246.00, stdev=45.85, samples=20 00:26:11.832 lat (msec) : 10=1.90%, 20=0.69%, 50=24.98%, 100=65.21%, 250=7.22% 00:26:11.832 cpu : usr=38.06%, sys=0.62%, ctx=1179, majf=0, minf=9 00:26:11.832 IO depths : 1=0.8%, 2=1.7%, 4=7.5%, 8=76.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:11.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.832 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.832 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:11.832 00:26:11.832 Run status group 0 (all jobs): 00:26:11.832 READ: bw=21.4MiB/s (22.5MB/s), 793KiB/s-1095KiB/s (812kB/s-1122kB/s), io=216MiB (226MB), run=10010-10069msec 00:26:11.832 18:42:17 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:11.832 18:42:17 -- target/dif.sh@43 -- # local sub 00:26:11.832 18:42:17 -- target/dif.sh@45 -- # for sub in "$@" 00:26:11.832 18:42:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:11.832 18:42:17 -- target/dif.sh@36 -- # local sub_id=0 00:26:11.832 18:42:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:11.832 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.832 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.832 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.832 18:42:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:11.832 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.832 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.832 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.832 18:42:17 -- target/dif.sh@45 -- # for sub in "$@" 00:26:11.832 18:42:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:11.832 18:42:17 -- target/dif.sh@36 -- # local sub_id=1 00:26:11.832 18:42:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.832 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.832 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.832 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.832 18:42:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:11.832 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.832 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.832 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.832 18:42:17 -- target/dif.sh@45 -- # for sub in "$@" 00:26:11.832 18:42:17 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:11.832 18:42:17 -- target/dif.sh@36 -- # local sub_id=2 00:26:11.832 18:42:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:11.832 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.832 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.832 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.832 18:42:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:11.832 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.832 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.832 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.832 18:42:17 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:11.832 18:42:17 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:11.832 18:42:17 -- target/dif.sh@115 -- # numjobs=2 00:26:11.832 18:42:17 -- target/dif.sh@115 -- # iodepth=8 00:26:11.832 18:42:17 -- target/dif.sh@115 -- # runtime=5 00:26:11.832 18:42:17 -- target/dif.sh@115 -- # files=1 00:26:11.833 18:42:17 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:11.833 18:42:17 -- target/dif.sh@28 -- # local sub 00:26:11.833 18:42:17 -- target/dif.sh@30 -- # for sub in "$@" 00:26:11.833 18:42:17 -- target/dif.sh@31 -- # create_subsystem 0 00:26:11.833 18:42:17 -- target/dif.sh@18 -- # local sub_id=0 00:26:11.833 18:42:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
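The next pass recreates the targets with --dif-type 1 before the 8k/16k/128k job set runs. A rough hand-driven equivalent of that per-subsystem setup, assuming the standard scripts/rpc.py client (the client path is an assumption; the method names and arguments mirror the rpc_cmd calls traced here):

# Back a null bdev with 16-byte metadata and DIF type 1, then expose it over NVMe/TCP.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420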
00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 bdev_null0 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 [2024-07-14 18:42:17.475986] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@30 -- # for sub in "$@" 00:26:11.833 18:42:17 -- target/dif.sh@31 -- # create_subsystem 1 00:26:11.833 18:42:17 -- target/dif.sh@18 -- # local sub_id=1 00:26:11.833 18:42:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 bdev_null1 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.833 18:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.833 18:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.833 18:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.833 18:42:17 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:11.833 18:42:17 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:11.833 18:42:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:11.833 18:42:17 -- nvmf/common.sh@520 -- # config=() 00:26:11.833 18:42:17 -- nvmf/common.sh@520 -- # local subsystem config 00:26:11.833 18:42:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:11.833 18:42:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:11.833 { 00:26:11.833 "params": { 00:26:11.833 "name": 
"Nvme$subsystem", 00:26:11.833 "trtype": "$TEST_TRANSPORT", 00:26:11.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.833 "adrfam": "ipv4", 00:26:11.833 "trsvcid": "$NVMF_PORT", 00:26:11.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.833 "hdgst": ${hdgst:-false}, 00:26:11.833 "ddgst": ${ddgst:-false} 00:26:11.833 }, 00:26:11.833 "method": "bdev_nvme_attach_controller" 00:26:11.833 } 00:26:11.833 EOF 00:26:11.833 )") 00:26:11.833 18:42:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.833 18:42:17 -- target/dif.sh@82 -- # gen_fio_conf 00:26:11.833 18:42:17 -- target/dif.sh@54 -- # local file 00:26:11.833 18:42:17 -- target/dif.sh@56 -- # cat 00:26:11.833 18:42:17 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.833 18:42:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:11.833 18:42:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.833 18:42:17 -- nvmf/common.sh@542 -- # cat 00:26:11.833 18:42:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:11.833 18:42:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.833 18:42:17 -- common/autotest_common.sh@1320 -- # shift 00:26:11.833 18:42:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:11.833 18:42:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.833 18:42:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:11.833 18:42:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:11.833 { 00:26:11.833 "params": { 00:26:11.833 "name": "Nvme$subsystem", 00:26:11.833 "trtype": "$TEST_TRANSPORT", 00:26:11.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.833 "adrfam": "ipv4", 00:26:11.833 "trsvcid": "$NVMF_PORT", 00:26:11.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.833 "hdgst": ${hdgst:-false}, 00:26:11.833 "ddgst": ${ddgst:-false} 00:26:11.833 }, 00:26:11.833 "method": "bdev_nvme_attach_controller" 00:26:11.833 } 00:26:11.833 EOF 00:26:11.833 )") 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:11.833 18:42:17 -- nvmf/common.sh@542 -- # cat 00:26:11.833 18:42:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:11.833 18:42:17 -- target/dif.sh@72 -- # (( file <= files )) 00:26:11.833 18:42:17 -- target/dif.sh@73 -- # cat 00:26:11.833 18:42:17 -- nvmf/common.sh@544 -- # jq . 
00:26:11.833 18:42:17 -- target/dif.sh@72 -- # (( file++ )) 00:26:11.833 18:42:17 -- target/dif.sh@72 -- # (( file <= files )) 00:26:11.833 18:42:17 -- nvmf/common.sh@545 -- # IFS=, 00:26:11.833 18:42:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:11.833 "params": { 00:26:11.833 "name": "Nvme0", 00:26:11.833 "trtype": "tcp", 00:26:11.833 "traddr": "10.0.0.2", 00:26:11.833 "adrfam": "ipv4", 00:26:11.833 "trsvcid": "4420", 00:26:11.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:11.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:11.833 "hdgst": false, 00:26:11.833 "ddgst": false 00:26:11.833 }, 00:26:11.833 "method": "bdev_nvme_attach_controller" 00:26:11.833 },{ 00:26:11.833 "params": { 00:26:11.833 "name": "Nvme1", 00:26:11.833 "trtype": "tcp", 00:26:11.833 "traddr": "10.0.0.2", 00:26:11.833 "adrfam": "ipv4", 00:26:11.833 "trsvcid": "4420", 00:26:11.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.833 "hdgst": false, 00:26:11.833 "ddgst": false 00:26:11.833 }, 00:26:11.833 "method": "bdev_nvme_attach_controller" 00:26:11.833 }' 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:11.833 18:42:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:11.833 18:42:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:11.833 18:42:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:11.833 18:42:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:11.833 18:42:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:11.833 18:42:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.833 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:11.833 ... 00:26:11.833 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:11.833 ... 00:26:11.833 fio-3.35 00:26:11.833 Starting 4 threads 00:26:11.833 [2024-07-14 18:42:18.214349] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:11.833 [2024-07-14 18:42:18.214416] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:16.034 00:26:16.034 filename0: (groupid=0, jobs=1): err= 0: pid=102178: Sun Jul 14 18:42:23 2024 00:26:16.034 read: IOPS=2031, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5002msec) 00:26:16.034 slat (nsec): min=6558, max=69686, avg=15725.42, stdev=5683.38 00:26:16.034 clat (usec): min=905, max=10580, avg=3861.42, stdev=363.54 00:26:16.034 lat (usec): min=915, max=10593, avg=3877.14, stdev=363.77 00:26:16.034 clat percentiles (usec): 00:26:16.034 | 1.00th=[ 3130], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3687], 00:26:16.034 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3916], 00:26:16.034 | 70.00th=[ 3982], 80.00th=[ 4047], 90.00th=[ 4178], 95.00th=[ 4293], 00:26:16.034 | 99.00th=[ 4555], 99.50th=[ 5407], 99.90th=[ 6915], 99.95th=[ 8586], 00:26:16.034 | 99.99th=[10552] 00:26:16.034 bw ( KiB/s): min=15488, max=16768, per=25.13%, avg=16366.11, stdev=416.96, samples=9 00:26:16.034 iops : min= 1936, max= 2096, avg=2045.67, stdev=52.12, samples=9 00:26:16.034 lat (usec) : 1000=0.01% 00:26:16.034 lat (msec) : 2=0.04%, 4=71.81%, 10=28.09%, 20=0.05% 00:26:16.034 cpu : usr=92.82%, sys=5.54%, ctx=12, majf=0, minf=0 00:26:16.034 IO depths : 1=10.3%, 2=25.0%, 4=50.0%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.034 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.034 issued rwts: total=10160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.034 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:16.034 filename0: (groupid=0, jobs=1): err= 0: pid=102179: Sun Jul 14 18:42:23 2024 00:26:16.034 read: IOPS=2040, BW=15.9MiB/s (16.7MB/s)(79.8MiB/5003msec) 00:26:16.034 slat (nsec): min=6491, max=63704, avg=8895.63, stdev=3674.49 00:26:16.034 clat (usec): min=1104, max=4916, avg=3875.95, stdev=291.87 00:26:16.035 lat (usec): min=1111, max=4950, avg=3884.85, stdev=292.00 00:26:16.035 clat percentiles (usec): 00:26:16.035 | 1.00th=[ 3163], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3720], 00:26:16.035 | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3949], 00:26:16.035 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:26:16.035 | 99.00th=[ 4424], 99.50th=[ 4490], 99.90th=[ 4686], 99.95th=[ 4686], 00:26:16.035 | 99.99th=[ 4883] 00:26:16.035 bw ( KiB/s): min=15616, max=17072, per=25.27%, avg=16458.67, stdev=455.72, samples=9 00:26:16.035 iops : min= 1952, max= 2134, avg=2057.33, stdev=56.96, samples=9 00:26:16.035 lat (msec) : 2=0.25%, 4=66.41%, 10=33.34% 00:26:16.035 cpu : usr=93.20%, sys=5.18%, ctx=29, majf=0, minf=0 00:26:16.035 IO depths : 1=9.7%, 2=24.2%, 4=50.8%, 8=15.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.035 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.035 issued rwts: total=10210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:16.035 filename1: (groupid=0, jobs=1): err= 0: pid=102180: Sun Jul 14 18:42:23 2024 00:26:16.035 read: IOPS=2034, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5001msec) 00:26:16.035 slat (nsec): min=6720, max=82336, avg=16024.71, stdev=5425.24 00:26:16.035 clat (usec): min=2330, max=5259, avg=3853.10, stdev=274.34 00:26:16.035 lat (usec): min=2339, max=5273, avg=3869.12, stdev=274.62 00:26:16.035 clat 
percentiles (usec): 00:26:16.035 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3687], 00:26:16.035 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3916], 00:26:16.035 | 70.00th=[ 3982], 80.00th=[ 4047], 90.00th=[ 4178], 95.00th=[ 4293], 00:26:16.035 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 4948], 99.95th=[ 5145], 00:26:16.035 | 99.99th=[ 5211] 00:26:16.035 bw ( KiB/s): min=15488, max=17024, per=25.18%, avg=16398.22, stdev=454.56, samples=9 00:26:16.035 iops : min= 1936, max= 2128, avg=2049.78, stdev=56.82, samples=9 00:26:16.035 lat (msec) : 4=72.09%, 10=27.91% 00:26:16.035 cpu : usr=92.78%, sys=5.66%, ctx=12, majf=0, minf=0 00:26:16.035 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.035 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.035 issued rwts: total=10176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:16.035 filename1: (groupid=0, jobs=1): err= 0: pid=102181: Sun Jul 14 18:42:23 2024 00:26:16.035 read: IOPS=2034, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5002msec) 00:26:16.035 slat (usec): min=6, max=121, avg=14.02, stdev= 6.28 00:26:16.035 clat (usec): min=2284, max=6762, avg=3868.60, stdev=277.88 00:26:16.035 lat (usec): min=2296, max=6778, avg=3882.62, stdev=277.74 00:26:16.035 clat percentiles (usec): 00:26:16.035 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3687], 00:26:16.035 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:26:16.035 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:26:16.035 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 5145], 99.95th=[ 5276], 00:26:16.035 | 99.99th=[ 5669] 00:26:16.035 bw ( KiB/s): min=15488, max=16896, per=25.18%, avg=16394.56, stdev=463.74, samples=9 00:26:16.035 iops : min= 1936, max= 2112, avg=2049.22, stdev=57.98, samples=9 00:26:16.035 lat (msec) : 4=69.50%, 10=30.50% 00:26:16.035 cpu : usr=92.90%, sys=5.28%, ctx=33, majf=0, minf=9 00:26:16.035 IO depths : 1=11.8%, 2=25.0%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.035 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.035 issued rwts: total=10176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:16.035 00:26:16.035 Run status group 0 (all jobs): 00:26:16.035 READ: bw=63.6MiB/s (66.7MB/s), 15.9MiB/s-15.9MiB/s (16.6MB/s-16.7MB/s), io=318MiB (334MB), run=5001-5003msec 00:26:16.294 18:42:23 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:16.294 18:42:23 -- target/dif.sh@43 -- # local sub 00:26:16.294 18:42:23 -- target/dif.sh@45 -- # for sub in "$@" 00:26:16.294 18:42:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:16.294 18:42:23 -- target/dif.sh@36 -- # local sub_id=0 00:26:16.294 18:42:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@45 -- # for sub in "$@" 00:26:16.294 18:42:23 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:16.294 18:42:23 -- target/dif.sh@36 -- # local sub_id=1 00:26:16.294 18:42:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 00:26:16.294 real 0m23.698s 00:26:16.294 user 2m7.277s 00:26:16.294 sys 0m4.136s 00:26:16.294 18:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 ************************************ 00:26:16.294 END TEST fio_dif_rand_params 00:26:16.294 ************************************ 00:26:16.294 18:42:23 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:16.294 18:42:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:16.294 18:42:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 ************************************ 00:26:16.294 START TEST fio_dif_digest 00:26:16.294 ************************************ 00:26:16.294 18:42:23 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:26:16.294 18:42:23 -- target/dif.sh@123 -- # local NULL_DIF 00:26:16.294 18:42:23 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:16.294 18:42:23 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:16.294 18:42:23 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:16.294 18:42:23 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:16.294 18:42:23 -- target/dif.sh@127 -- # numjobs=3 00:26:16.294 18:42:23 -- target/dif.sh@127 -- # iodepth=3 00:26:16.294 18:42:23 -- target/dif.sh@127 -- # runtime=10 00:26:16.294 18:42:23 -- target/dif.sh@128 -- # hdgst=true 00:26:16.294 18:42:23 -- target/dif.sh@128 -- # ddgst=true 00:26:16.294 18:42:23 -- target/dif.sh@130 -- # create_subsystems 0 00:26:16.294 18:42:23 -- target/dif.sh@28 -- # local sub 00:26:16.294 18:42:23 -- target/dif.sh@30 -- # for sub in "$@" 00:26:16.294 18:42:23 -- target/dif.sh@31 -- # create_subsystem 0 00:26:16.294 18:42:23 -- target/dif.sh@18 -- # local sub_id=0 00:26:16.294 18:42:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 bdev_null0 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:16.294 18:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.294 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.294 [2024-07-14 18:42:23.688208] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.294 18:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.294 18:42:23 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:16.294 18:42:23 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:16.294 18:42:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:16.294 18:42:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:16.294 18:42:23 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:16.294 18:42:23 -- nvmf/common.sh@520 -- # config=() 00:26:16.294 18:42:23 -- target/dif.sh@82 -- # gen_fio_conf 00:26:16.294 18:42:23 -- nvmf/common.sh@520 -- # local subsystem config 00:26:16.294 18:42:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:16.294 18:42:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:16.294 18:42:23 -- target/dif.sh@54 -- # local file 00:26:16.294 18:42:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:16.294 { 00:26:16.294 "params": { 00:26:16.294 "name": "Nvme$subsystem", 00:26:16.294 "trtype": "$TEST_TRANSPORT", 00:26:16.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.294 "adrfam": "ipv4", 00:26:16.294 "trsvcid": "$NVMF_PORT", 00:26:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.294 "hdgst": ${hdgst:-false}, 00:26:16.294 "ddgst": ${ddgst:-false} 00:26:16.294 }, 00:26:16.294 "method": "bdev_nvme_attach_controller" 00:26:16.294 } 00:26:16.294 EOF 00:26:16.294 )") 00:26:16.294 18:42:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:16.294 18:42:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:16.294 18:42:23 -- target/dif.sh@56 -- # cat 00:26:16.294 18:42:23 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:16.294 18:42:23 -- common/autotest_common.sh@1320 -- # shift 00:26:16.294 18:42:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:16.294 18:42:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:16.294 18:42:23 -- nvmf/common.sh@542 -- # cat 00:26:16.294 18:42:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:16.294 18:42:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:16.294 18:42:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:16.294 18:42:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:16.294 18:42:23 -- target/dif.sh@72 -- # (( file <= files )) 00:26:16.294 18:42:23 -- nvmf/common.sh@544 -- # jq . 
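Condensed, the fio_dif_digest setup traced above boils down to the RPC sequence below; scripts/rpc.py stands in for the rpc_cmd wrapper used by the test, and the arguments are copied from the trace. The attach-controller JSON printed just below then sets "hdgst" and "ddgst" to true, so this run exercises NVMe/TCP header and data digests end to end.

# Target side: a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3,
# exported through an NVMe/TCP subsystem listening on 10.0.0.2:4420.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420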
00:26:16.294 18:42:23 -- nvmf/common.sh@545 -- # IFS=, 00:26:16.294 18:42:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:16.295 "params": { 00:26:16.295 "name": "Nvme0", 00:26:16.295 "trtype": "tcp", 00:26:16.295 "traddr": "10.0.0.2", 00:26:16.295 "adrfam": "ipv4", 00:26:16.295 "trsvcid": "4420", 00:26:16.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:16.295 "hdgst": true, 00:26:16.295 "ddgst": true 00:26:16.295 }, 00:26:16.295 "method": "bdev_nvme_attach_controller" 00:26:16.295 }' 00:26:16.552 18:42:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:16.552 18:42:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:16.552 18:42:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:16.552 18:42:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:16.552 18:42:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:16.552 18:42:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:16.552 18:42:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:16.552 18:42:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:16.552 18:42:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:16.552 18:42:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:16.552 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:16.552 ... 00:26:16.552 fio-3.35 00:26:16.552 Starting 3 threads 00:26:17.119 [2024-07-14 18:42:24.268232] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:17.119 [2024-07-14 18:42:24.268307] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:27.128 00:26:27.128 filename0: (groupid=0, jobs=1): err= 0: pid=102287: Sun Jul 14 18:42:34 2024 00:26:27.128 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(249MiB/10005msec) 00:26:27.128 slat (usec): min=6, max=113, avg=16.76, stdev= 8.18 00:26:27.128 clat (usec): min=8421, max=23784, avg=15058.90, stdev=2755.49 00:26:27.128 lat (usec): min=8440, max=23799, avg=15075.66, stdev=2754.97 00:26:27.128 clat percentiles (usec): 00:26:27.128 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[12256], 00:26:27.128 | 30.00th=[14746], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:26:27.128 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:26:27.128 | 99.00th=[19268], 99.50th=[20841], 99.90th=[23725], 99.95th=[23725], 00:26:27.128 | 99.99th=[23725] 00:26:27.128 bw ( KiB/s): min=20777, max=27904, per=30.71%, avg=25197.95, stdev=1699.26, samples=19 00:26:27.128 iops : min= 162, max= 218, avg=196.84, stdev=13.32, samples=19 00:26:27.128 lat (msec) : 10=8.59%, 20=90.80%, 50=0.60% 00:26:27.128 cpu : usr=94.26%, sys=4.18%, ctx=14, majf=0, minf=9 00:26:27.128 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.128 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:27.128 filename0: (groupid=0, jobs=1): err= 0: pid=102288: Sun Jul 14 18:42:34 2024 00:26:27.128 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(278MiB/10005msec) 00:26:27.128 slat (usec): min=6, max=114, avg=21.19, stdev=11.29 00:26:27.128 clat (usec): min=6687, max=23647, avg=13455.29, stdev=2756.18 00:26:27.128 lat (usec): min=6705, max=23675, avg=13476.48, stdev=2757.49 00:26:27.128 clat percentiles (usec): 00:26:27.128 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[10683], 00:26:27.128 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14091], 60.00th=[14484], 00:26:27.128 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16319], 95.00th=[16909], 00:26:27.128 | 99.00th=[18220], 99.50th=[19530], 99.90th=[22152], 99.95th=[22152], 00:26:27.128 | 99.99th=[23725] 00:26:27.128 bw ( KiB/s): min=23040, max=32256, per=34.27%, avg=28119.58, stdev=2267.79, samples=19 00:26:27.128 iops : min= 180, max= 252, avg=219.68, stdev=17.72, samples=19 00:26:27.128 lat (msec) : 10=17.83%, 20=81.72%, 50=0.45% 00:26:27.128 cpu : usr=92.58%, sys=5.35%, ctx=87, majf=0, minf=9 00:26:27.128 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.128 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:27.128 filename0: (groupid=0, jobs=1): err= 0: pid=102289: Sun Jul 14 18:42:34 2024 00:26:27.128 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(275MiB/10007msec) 00:26:27.128 slat (nsec): min=6616, max=90794, avg=16058.90, stdev=8375.54 00:26:27.128 clat (usec): min=8659, max=54943, avg=13631.85, stdev=8690.89 00:26:27.128 lat (usec): min=8670, max=54956, avg=13647.91, stdev=8690.84 00:26:27.128 clat percentiles (usec): 00:26:27.128 | 1.00th=[ 9372], 
5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:26:27.128 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:26:27.128 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13435], 95.00th=[17433], 00:26:27.128 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:26:27.128 | 99.99th=[54789] 00:26:27.128 bw ( KiB/s): min=21248, max=33024, per=34.27%, avg=28121.60, stdev=3183.68, samples=20 00:26:27.128 iops : min= 166, max= 258, avg=219.70, stdev=24.87, samples=20 00:26:27.128 lat (msec) : 10=5.23%, 20=90.00%, 50=0.05%, 100=4.73% 00:26:27.128 cpu : usr=94.63%, sys=3.84%, ctx=9, majf=0, minf=0 00:26:27.128 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.128 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:27.128 00:26:27.128 Run status group 0 (all jobs): 00:26:27.128 READ: bw=80.1MiB/s (84.0MB/s), 24.9MiB/s-27.8MiB/s (26.1MB/s-29.2MB/s), io=802MiB (841MB), run=10005-10007msec 00:26:27.387 18:42:34 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:27.388 18:42:34 -- target/dif.sh@43 -- # local sub 00:26:27.388 18:42:34 -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.388 18:42:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:27.388 18:42:34 -- target/dif.sh@36 -- # local sub_id=0 00:26:27.388 18:42:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:27.388 18:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.388 18:42:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.388 18:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.388 18:42:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:27.388 18:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.388 18:42:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.388 ************************************ 00:26:27.388 END TEST fio_dif_digest 00:26:27.388 ************************************ 00:26:27.388 18:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.388 00:26:27.388 real 0m10.968s 00:26:27.388 user 0m28.750s 00:26:27.388 sys 0m1.629s 00:26:27.388 18:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:27.388 18:42:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.388 18:42:34 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:27.388 18:42:34 -- target/dif.sh@147 -- # nvmftestfini 00:26:27.388 18:42:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:27.388 18:42:34 -- nvmf/common.sh@116 -- # sync 00:26:27.388 18:42:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:27.388 18:42:34 -- nvmf/common.sh@119 -- # set +e 00:26:27.388 18:42:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:27.388 18:42:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:27.388 rmmod nvme_tcp 00:26:27.388 rmmod nvme_fabrics 00:26:27.388 rmmod nvme_keyring 00:26:27.388 18:42:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:27.388 18:42:34 -- nvmf/common.sh@123 -- # set -e 00:26:27.388 18:42:34 -- nvmf/common.sh@124 -- # return 0 00:26:27.388 18:42:34 -- nvmf/common.sh@477 -- # '[' -n 101529 ']' 00:26:27.388 18:42:34 -- nvmf/common.sh@478 -- # killprocess 101529 00:26:27.388 18:42:34 -- common/autotest_common.sh@926 -- # '[' -z 101529 ']' 00:26:27.388 
18:42:34 -- common/autotest_common.sh@930 -- # kill -0 101529 00:26:27.388 18:42:34 -- common/autotest_common.sh@931 -- # uname 00:26:27.388 18:42:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:27.388 18:42:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101529 00:26:27.388 killing process with pid 101529 00:26:27.388 18:42:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:27.388 18:42:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:27.388 18:42:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101529' 00:26:27.388 18:42:34 -- common/autotest_common.sh@945 -- # kill 101529 00:26:27.388 18:42:34 -- common/autotest_common.sh@950 -- # wait 101529 00:26:27.647 18:42:34 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:27.647 18:42:34 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:27.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:28.163 Waiting for block devices as requested 00:26:28.163 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:28.163 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:28.163 18:42:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:28.163 18:42:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:28.163 18:42:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.163 18:42:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:28.163 18:42:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.163 18:42:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:28.163 18:42:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.163 18:42:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:28.163 00:26:28.163 real 0m59.813s 00:26:28.163 user 3m52.310s 00:26:28.163 sys 0m13.842s 00:26:28.163 18:42:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.163 18:42:35 -- common/autotest_common.sh@10 -- # set +x 00:26:28.163 ************************************ 00:26:28.163 END TEST nvmf_dif 00:26:28.163 ************************************ 00:26:28.421 18:42:35 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:28.421 18:42:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:28.421 18:42:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.421 18:42:35 -- common/autotest_common.sh@10 -- # set +x 00:26:28.421 ************************************ 00:26:28.421 START TEST nvmf_abort_qd_sizes 00:26:28.421 ************************************ 00:26:28.421 18:42:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:28.421 * Looking for test storage... 
00:26:28.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:28.421 18:42:35 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.421 18:42:35 -- nvmf/common.sh@7 -- # uname -s 00:26:28.421 18:42:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.421 18:42:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.421 18:42:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.421 18:42:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.421 18:42:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.421 18:42:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.421 18:42:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.421 18:42:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.421 18:42:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.421 18:42:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.421 18:42:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db 00:26:28.421 18:42:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=42162aed-0e24-4758-911b-86aefe0815db 00:26:28.421 18:42:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.422 18:42:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.422 18:42:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:28.422 18:42:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.422 18:42:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.422 18:42:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.422 18:42:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.422 18:42:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.422 18:42:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.422 18:42:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.422 18:42:35 -- paths/export.sh@5 -- # export PATH 00:26:28.422 18:42:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.422 18:42:35 -- nvmf/common.sh@46 -- # : 0 00:26:28.422 18:42:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:28.422 18:42:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:28.422 18:42:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:28.422 18:42:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.422 18:42:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.422 18:42:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:28.422 18:42:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:28.422 18:42:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:28.422 18:42:35 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:28.422 18:42:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:28.422 18:42:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.422 18:42:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:28.422 18:42:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:28.422 18:42:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:28.422 18:42:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.422 18:42:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:28.422 18:42:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.422 18:42:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:28.422 18:42:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:28.422 18:42:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:28.422 18:42:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:28.422 18:42:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:28.422 18:42:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:28.422 18:42:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.422 18:42:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.422 18:42:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:28.422 18:42:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:28.422 18:42:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:28.422 18:42:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:28.422 18:42:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:28.422 18:42:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.422 18:42:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:28.422 18:42:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:28.422 18:42:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:28.422 18:42:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:28.422 18:42:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:28.422 18:42:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:28.422 Cannot find device "nvmf_tgt_br" 00:26:28.422 18:42:35 -- nvmf/common.sh@154 -- # true 00:26:28.422 18:42:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:28.422 Cannot find device "nvmf_tgt_br2" 00:26:28.422 18:42:35 -- nvmf/common.sh@155 -- # true 
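The commands that follow in the trace build the virtual test network used by nvmf_abort_qd_sizes: the target runs inside a network namespace and is reached from the host over veth pairs joined by a bridge. A condensed sketch of that topology is below, with interface names and addresses taken from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity.

# Namespace for the nvmf target plus a host<->namespace veth path, bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP traffic in and sanity-check connectivity into the namespace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2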
00:26:28.422 18:42:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:28.422 18:42:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:28.422 Cannot find device "nvmf_tgt_br" 00:26:28.422 18:42:35 -- nvmf/common.sh@157 -- # true 00:26:28.422 18:42:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:28.422 Cannot find device "nvmf_tgt_br2" 00:26:28.422 18:42:35 -- nvmf/common.sh@158 -- # true 00:26:28.422 18:42:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:28.422 18:42:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:28.422 18:42:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.422 18:42:35 -- nvmf/common.sh@161 -- # true 00:26:28.422 18:42:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.681 18:42:35 -- nvmf/common.sh@162 -- # true 00:26:28.681 18:42:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:28.681 18:42:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:28.681 18:42:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:28.681 18:42:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:28.681 18:42:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:28.681 18:42:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:28.681 18:42:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:28.681 18:42:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:28.681 18:42:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:28.681 18:42:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:28.681 18:42:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:28.681 18:42:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:28.681 18:42:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:28.681 18:42:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:28.681 18:42:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:28.681 18:42:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:28.681 18:42:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:28.681 18:42:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:28.681 18:42:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:28.681 18:42:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:28.681 18:42:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:28.681 18:42:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:28.681 18:42:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:28.681 18:42:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:28.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:28.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:26:28.681 00:26:28.681 --- 10.0.0.2 ping statistics --- 00:26:28.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.681 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:28.681 18:42:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:28.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:28.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:26:28.681 00:26:28.681 --- 10.0.0.3 ping statistics --- 00:26:28.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.681 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:28.681 18:42:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:28.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:26:28.681 00:26:28.681 --- 10.0.0.1 ping statistics --- 00:26:28.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.681 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:26:28.681 18:42:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.681 18:42:36 -- nvmf/common.sh@421 -- # return 0 00:26:28.681 18:42:36 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:28.681 18:42:36 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:29.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:29.505 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:29.505 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:29.505 18:42:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.505 18:42:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:29.505 18:42:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:29.505 18:42:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.505 18:42:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:29.505 18:42:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:29.505 18:42:36 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:29.505 18:42:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:29.505 18:42:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:29.505 18:42:36 -- common/autotest_common.sh@10 -- # set +x 00:26:29.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.505 18:42:36 -- nvmf/common.sh@469 -- # nvmfpid=102884 00:26:29.505 18:42:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:29.505 18:42:36 -- nvmf/common.sh@470 -- # waitforlisten 102884 00:26:29.505 18:42:36 -- common/autotest_common.sh@819 -- # '[' -z 102884 ']' 00:26:29.505 18:42:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.505 18:42:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:29.505 18:42:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.505 18:42:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:29.505 18:42:36 -- common/autotest_common.sh@10 -- # set +x 00:26:29.762 [2024-07-14 18:42:36.944411] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:29.762 [2024-07-14 18:42:36.944690] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.762 [2024-07-14 18:42:37.088201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.762 [2024-07-14 18:42:37.173036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:29.762 [2024-07-14 18:42:37.173385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.762 [2024-07-14 18:42:37.173577] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.762 [2024-07-14 18:42:37.173743] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.762 [2024-07-14 18:42:37.174005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.762 [2024-07-14 18:42:37.174084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.763 [2024-07-14 18:42:37.174156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.763 [2024-07-14 18:42:37.174157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.696 18:42:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:30.696 18:42:37 -- common/autotest_common.sh@852 -- # return 0 00:26:30.696 18:42:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:30.696 18:42:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:30.696 18:42:37 -- common/autotest_common.sh@10 -- # set +x 00:26:30.696 18:42:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.696 18:42:37 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:30.696 18:42:37 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:30.696 18:42:37 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:30.696 18:42:37 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:30.696 18:42:37 -- scripts/common.sh@312 -- # local nvmes 00:26:30.696 18:42:37 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:30.696 18:42:37 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:30.696 18:42:37 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:30.696 18:42:37 -- scripts/common.sh@297 -- # local bdf= 00:26:30.696 18:42:37 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:30.696 18:42:37 -- scripts/common.sh@232 -- # local class 00:26:30.696 18:42:37 -- scripts/common.sh@233 -- # local subclass 00:26:30.696 18:42:37 -- scripts/common.sh@234 -- # local progif 00:26:30.696 18:42:37 -- scripts/common.sh@235 -- # printf %02x 1 00:26:30.696 18:42:37 -- scripts/common.sh@235 -- # class=01 00:26:30.696 18:42:37 -- scripts/common.sh@236 -- # printf %02x 8 00:26:30.696 18:42:37 -- scripts/common.sh@236 -- # subclass=08 00:26:30.696 18:42:37 -- scripts/common.sh@237 -- # printf %02x 2 00:26:30.696 18:42:37 -- scripts/common.sh@237 -- # progif=02 00:26:30.696 18:42:37 -- scripts/common.sh@239 -- # hash lspci 00:26:30.696 18:42:37 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:30.696 18:42:37 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:30.696 18:42:37 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:30.696 18:42:37 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:30.696 18:42:37 -- scripts/common.sh@244 -- # tr -d '"' 00:26:30.697 18:42:37 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:30.697 18:42:37 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:30.697 18:42:37 -- scripts/common.sh@15 -- # local i 00:26:30.697 18:42:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:30.697 18:42:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:30.697 18:42:38 -- scripts/common.sh@24 -- # return 0 00:26:30.697 18:42:38 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:30.697 18:42:38 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:30.697 18:42:38 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:30.697 18:42:38 -- scripts/common.sh@15 -- # local i 00:26:30.697 18:42:38 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:30.697 18:42:38 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:30.697 18:42:38 -- scripts/common.sh@24 -- # return 0 00:26:30.697 18:42:38 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:30.697 18:42:38 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:30.697 18:42:38 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:30.697 18:42:38 -- scripts/common.sh@322 -- # uname -s 00:26:30.697 18:42:38 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:30.697 18:42:38 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:30.697 18:42:38 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:30.697 18:42:38 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:30.697 18:42:38 -- scripts/common.sh@322 -- # uname -s 00:26:30.697 18:42:38 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:30.697 18:42:38 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:30.697 18:42:38 -- scripts/common.sh@327 -- # (( 2 )) 00:26:30.697 18:42:38 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:30.697 18:42:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:30.697 18:42:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:30.697 18:42:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.697 ************************************ 00:26:30.697 START TEST spdk_target_abort 00:26:30.697 ************************************ 00:26:30.697 18:42:38 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:30.697 18:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.697 18:42:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.697 spdk_targetn1 00:26:30.697 18:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.697 18:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.697 18:42:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.697 [2024-07-14 
18:42:38.109412] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.697 18:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.697 18:42:38 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:30.697 18:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.697 18:42:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.955 18:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:30.955 18:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.955 18:42:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.955 18:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:30.955 18:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.955 18:42:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.955 [2024-07-14 18:42:38.145593] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.955 18:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:30.955 18:42:38 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:34.236 Initializing NVMe Controllers 00:26:34.236 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:34.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:34.236 Initialization complete. Launching workers. 00:26:34.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10457, failed: 0 00:26:34.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1086, failed to submit 9371 00:26:34.237 success 757, unsuccess 329, failed 0 00:26:34.237 18:42:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:34.237 18:42:41 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:37.545 [2024-07-14 18:42:44.620640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x702ff0 is same with the state(5) to be set 00:26:37.545 Initializing NVMe Controllers 00:26:37.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:37.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:37.546 Initialization complete. Launching workers. 00:26:37.546 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5980, failed: 0 00:26:37.546 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1249, failed to submit 4731 00:26:37.546 success 250, unsuccess 999, failed 0 00:26:37.546 18:42:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:37.546 18:42:44 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:40.870 Initializing NVMe Controllers 00:26:40.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:40.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:40.870 Initialization complete. Launching workers. 
00:26:40.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29979, failed: 0 00:26:40.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2647, failed to submit 27332 00:26:40.870 success 379, unsuccess 2268, failed 0 00:26:40.870 18:42:47 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:40.870 18:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.870 18:42:47 -- common/autotest_common.sh@10 -- # set +x 00:26:40.870 18:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.870 18:42:47 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:40.870 18:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.870 18:42:47 -- common/autotest_common.sh@10 -- # set +x 00:26:41.128 18:42:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.128 18:42:48 -- target/abort_qd_sizes.sh@62 -- # killprocess 102884 00:26:41.128 18:42:48 -- common/autotest_common.sh@926 -- # '[' -z 102884 ']' 00:26:41.128 18:42:48 -- common/autotest_common.sh@930 -- # kill -0 102884 00:26:41.128 18:42:48 -- common/autotest_common.sh@931 -- # uname 00:26:41.128 18:42:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.129 18:42:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102884 00:26:41.129 18:42:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:41.129 18:42:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:41.129 18:42:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102884' 00:26:41.129 killing process with pid 102884 00:26:41.129 18:42:48 -- common/autotest_common.sh@945 -- # kill 102884 00:26:41.129 18:42:48 -- common/autotest_common.sh@950 -- # wait 102884 00:26:41.386 ************************************ 00:26:41.386 END TEST spdk_target_abort 00:26:41.386 ************************************ 00:26:41.386 00:26:41.386 real 0m10.655s 00:26:41.386 user 0m43.631s 00:26:41.386 sys 0m1.742s 00:26:41.386 18:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.386 18:42:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.386 18:42:48 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:41.386 18:42:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:41.386 18:42:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:41.386 18:42:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.386 ************************************ 00:26:41.386 START TEST kernel_target_abort 00:26:41.386 ************************************ 00:26:41.386 18:42:48 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:41.386 18:42:48 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:41.386 18:42:48 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:41.386 18:42:48 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:41.386 18:42:48 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:41.386 18:42:48 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:41.386 18:42:48 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:41.386 18:42:48 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:41.386 18:42:48 -- nvmf/common.sh@627 -- # local block nvme 00:26:41.386 18:42:48 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:41.386 18:42:48 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:41.386 18:42:48 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:41.386 18:42:48 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:41.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.952 Waiting for block devices as requested 00:26:41.952 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.952 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.952 18:42:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:41.952 18:42:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:41.952 18:42:49 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:41.952 18:42:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:41.952 18:42:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:41.952 No valid GPT data, bailing 00:26:41.952 18:42:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:41.952 18:42:49 -- scripts/common.sh@393 -- # pt= 00:26:41.952 18:42:49 -- scripts/common.sh@394 -- # return 1 00:26:41.952 18:42:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:41.952 18:42:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:41.952 18:42:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:41.952 18:42:49 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:41.952 18:42:49 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:41.952 18:42:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:42.211 No valid GPT data, bailing 00:26:42.211 18:42:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:42.211 18:42:49 -- scripts/common.sh@393 -- # pt= 00:26:42.211 18:42:49 -- scripts/common.sh@394 -- # return 1 00:26:42.211 18:42:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:42.211 18:42:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:42.211 18:42:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:42.211 18:42:49 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:42.211 18:42:49 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:42.211 18:42:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:42.211 No valid GPT data, bailing 00:26:42.211 18:42:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:42.211 18:42:49 -- scripts/common.sh@393 -- # pt= 00:26:42.211 18:42:49 -- scripts/common.sh@394 -- # return 1 00:26:42.211 18:42:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:42.211 18:42:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:42.211 18:42:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:42.211 18:42:49 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:42.211 18:42:49 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:42.211 18:42:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:42.211 No valid GPT data, bailing 00:26:42.211 18:42:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:42.211 18:42:49 -- scripts/common.sh@393 -- # pt= 00:26:42.211 18:42:49 -- scripts/common.sh@394 -- # return 1 00:26:42.211 18:42:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:42.211 18:42:49 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:42.211 18:42:49 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:42.211 18:42:49 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:42.211 18:42:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:42.211 18:42:49 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:42.211 18:42:49 -- nvmf/common.sh@654 -- # echo 1 00:26:42.211 18:42:49 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:42.211 18:42:49 -- nvmf/common.sh@656 -- # echo 1 00:26:42.211 18:42:49 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:42.211 18:42:49 -- nvmf/common.sh@663 -- # echo tcp 00:26:42.211 18:42:49 -- nvmf/common.sh@664 -- # echo 4420 00:26:42.211 18:42:49 -- nvmf/common.sh@665 -- # echo ipv4 00:26:42.211 18:42:49 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:42.211 18:42:49 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42162aed-0e24-4758-911b-86aefe0815db --hostid=42162aed-0e24-4758-911b-86aefe0815db -a 10.0.0.1 -t tcp -s 4420 00:26:42.211 00:26:42.211 Discovery Log Number of Records 2, Generation counter 2 00:26:42.211 =====Discovery Log Entry 0====== 00:26:42.211 trtype: tcp 00:26:42.211 adrfam: ipv4 00:26:42.211 subtype: current discovery subsystem 00:26:42.211 treq: not specified, sq flow control disable supported 00:26:42.211 portid: 1 00:26:42.211 trsvcid: 4420 00:26:42.211 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:42.211 traddr: 10.0.0.1 00:26:42.211 eflags: none 00:26:42.211 sectype: none 00:26:42.211 =====Discovery Log Entry 1====== 00:26:42.211 trtype: tcp 00:26:42.211 adrfam: ipv4 00:26:42.211 subtype: nvme subsystem 00:26:42.211 treq: not specified, sq flow control disable supported 00:26:42.211 portid: 1 00:26:42.211 trsvcid: 4420 00:26:42.211 subnqn: kernel_target 00:26:42.211 traddr: 10.0.0.1 00:26:42.211 eflags: none 00:26:42.211 sectype: none 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:42.211 18:42:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:45.496 Initializing NVMe Controllers 00:26:45.496 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:45.496 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:45.496 Initialization complete. Launching workers. 00:26:45.496 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30393, failed: 0 00:26:45.496 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30393, failed to submit 0 00:26:45.496 success 0, unsuccess 30393, failed 0 00:26:45.496 18:42:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:45.496 18:42:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:48.776 Initializing NVMe Controllers 00:26:48.776 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:48.776 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:48.776 Initialization complete. Launching workers. 00:26:48.776 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66884, failed: 0 00:26:48.776 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27464, failed to submit 39420 00:26:48.776 success 0, unsuccess 27464, failed 0 00:26:48.776 18:42:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:48.776 18:42:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:52.105 Initializing NVMe Controllers 00:26:52.105 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:52.105 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:52.105 Initialization complete. Launching workers. 
00:26:52.105 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76607, failed: 0 00:26:52.105 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19106, failed to submit 57501 00:26:52.105 success 0, unsuccess 19106, failed 0 00:26:52.105 18:42:59 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:52.105 18:42:59 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:52.105 18:42:59 -- nvmf/common.sh@677 -- # echo 0 00:26:52.105 18:42:59 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:52.105 18:42:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:52.105 18:42:59 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:52.105 18:42:59 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:52.105 18:42:59 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:52.105 18:42:59 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:52.105 00:26:52.105 real 0m10.455s 00:26:52.105 user 0m5.116s 00:26:52.105 sys 0m2.636s 00:26:52.105 18:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.105 18:42:59 -- common/autotest_common.sh@10 -- # set +x 00:26:52.105 ************************************ 00:26:52.105 END TEST kernel_target_abort 00:26:52.105 ************************************ 00:26:52.105 18:42:59 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:52.105 18:42:59 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:52.105 18:42:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:52.105 18:42:59 -- nvmf/common.sh@116 -- # sync 00:26:52.105 18:42:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:52.105 18:42:59 -- nvmf/common.sh@119 -- # set +e 00:26:52.105 18:42:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:52.105 18:42:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:52.105 rmmod nvme_tcp 00:26:52.105 rmmod nvme_fabrics 00:26:52.105 rmmod nvme_keyring 00:26:52.105 18:42:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:52.105 18:42:59 -- nvmf/common.sh@123 -- # set -e 00:26:52.105 18:42:59 -- nvmf/common.sh@124 -- # return 0 00:26:52.105 18:42:59 -- nvmf/common.sh@477 -- # '[' -n 102884 ']' 00:26:52.105 18:42:59 -- nvmf/common.sh@478 -- # killprocess 102884 00:26:52.105 18:42:59 -- common/autotest_common.sh@926 -- # '[' -z 102884 ']' 00:26:52.105 18:42:59 -- common/autotest_common.sh@930 -- # kill -0 102884 00:26:52.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102884) - No such process 00:26:52.105 Process with pid 102884 is not found 00:26:52.105 18:42:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102884 is not found' 00:26:52.105 18:42:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:52.105 18:42:59 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:52.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:52.672 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:52.672 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:52.672 18:43:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:52.672 18:43:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:52.672 18:43:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.672 18:43:00 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:52.672 18:43:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.672 18:43:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:52.672 18:43:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.672 18:43:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:52.931 ************************************ 00:26:52.931 END TEST nvmf_abort_qd_sizes 00:26:52.931 ************************************ 00:26:52.931 00:26:52.931 real 0m24.483s 00:26:52.931 user 0m50.065s 00:26:52.931 sys 0m5.667s 00:26:52.931 18:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.931 18:43:00 -- common/autotest_common.sh@10 -- # set +x 00:26:52.931 18:43:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:52.931 18:43:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:52.931 18:43:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:52.931 18:43:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:52.931 18:43:00 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:52.931 18:43:00 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:52.931 18:43:00 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:52.931 18:43:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:52.931 18:43:00 -- common/autotest_common.sh@10 -- # set +x 00:26:52.931 18:43:00 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:52.931 18:43:00 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:26:52.931 18:43:00 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:26:52.931 18:43:00 -- common/autotest_common.sh@10 -- # set +x 00:26:54.833 INFO: APP EXITING 00:26:54.833 INFO: killing all VMs 00:26:54.834 INFO: killing vhost app 00:26:54.834 INFO: EXIT DONE 00:26:55.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:55.092 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:55.092 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:56.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.025 Cleaning 00:26:56.025 Removing: /var/run/dpdk/spdk0/config 00:26:56.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:56.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:56.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:56.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:56.025 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:56.025 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:56.025 Removing: /var/run/dpdk/spdk1/config 00:26:56.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:56.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:56.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:56.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:56.025 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:56.025 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:56.025 Removing: /var/run/dpdk/spdk2/config 00:26:56.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:56.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:56.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:56.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:56.025 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:56.025 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:56.025 Removing: /var/run/dpdk/spdk3/config 00:26:56.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:56.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:56.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:56.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:56.025 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:56.025 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:56.025 Removing: /var/run/dpdk/spdk4/config 00:26:56.025 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:56.025 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:56.025 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:56.025 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:56.025 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:56.025 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:56.025 Removing: /dev/shm/nvmf_trace.0 00:26:56.025 Removing: /dev/shm/spdk_tgt_trace.pid67650 00:26:56.025 Removing: /var/run/dpdk/spdk0 00:26:56.025 Removing: /var/run/dpdk/spdk1 00:26:56.025 Removing: /var/run/dpdk/spdk2 00:26:56.025 Removing: /var/run/dpdk/spdk3 00:26:56.025 Removing: /var/run/dpdk/spdk4 00:26:56.025 Removing: /var/run/dpdk/spdk_pid100107 00:26:56.025 Removing: /var/run/dpdk/spdk_pid100398 00:26:56.025 Removing: /var/run/dpdk/spdk_pid100698 00:26:56.025 Removing: /var/run/dpdk/spdk_pid101234 00:26:56.025 Removing: /var/run/dpdk/spdk_pid101243 00:26:56.025 Removing: /var/run/dpdk/spdk_pid101605 00:26:56.025 Removing: /var/run/dpdk/spdk_pid101764 00:26:56.025 Removing: /var/run/dpdk/spdk_pid101921 00:26:56.025 Removing: /var/run/dpdk/spdk_pid102018 00:26:56.025 Removing: /var/run/dpdk/spdk_pid102174 00:26:56.025 Removing: /var/run/dpdk/spdk_pid102283 00:26:56.025 Removing: /var/run/dpdk/spdk_pid102953 00:26:56.025 Removing: /var/run/dpdk/spdk_pid102988 00:26:56.025 Removing: /var/run/dpdk/spdk_pid103022 00:26:56.025 Removing: /var/run/dpdk/spdk_pid103267 00:26:56.025 Removing: /var/run/dpdk/spdk_pid103303 00:26:56.025 Removing: /var/run/dpdk/spdk_pid103333 00:26:56.025 Removing: /var/run/dpdk/spdk_pid67501 00:26:56.025 Removing: /var/run/dpdk/spdk_pid67650 00:26:56.025 Removing: /var/run/dpdk/spdk_pid67951 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68220 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68395 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68476 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68567 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68650 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68694 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68724 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68785 00:26:56.025 Removing: /var/run/dpdk/spdk_pid68902 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69525 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69584 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69648 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69676 00:26:56.025 Removing: 
/var/run/dpdk/spdk_pid69755 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69783 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69864 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69892 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69949 00:26:56.025 Removing: /var/run/dpdk/spdk_pid69979 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70025 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70055 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70207 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70237 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70311 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70380 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70407 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70465 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70489 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70519 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70539 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70573 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70593 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70627 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70647 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70676 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70701 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70730 00:26:56.025 Removing: /var/run/dpdk/spdk_pid70749 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70784 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70799 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70839 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70853 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70893 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70907 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70946 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70961 00:26:56.282 Removing: /var/run/dpdk/spdk_pid70996 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71015 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71050 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71069 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71098 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71118 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71152 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71172 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71206 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71226 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71255 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71280 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71309 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71337 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71369 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71396 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71429 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71449 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71483 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71503 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71538 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71602 00:26:56.282 Removing: /var/run/dpdk/spdk_pid71712 00:26:56.282 Removing: /var/run/dpdk/spdk_pid72127 00:26:56.282 Removing: /var/run/dpdk/spdk_pid78829 00:26:56.282 Removing: /var/run/dpdk/spdk_pid79170 00:26:56.282 Removing: /var/run/dpdk/spdk_pid81595 00:26:56.282 Removing: /var/run/dpdk/spdk_pid81967 00:26:56.282 Removing: /var/run/dpdk/spdk_pid82224 00:26:56.282 Removing: /var/run/dpdk/spdk_pid82275 00:26:56.282 Removing: /var/run/dpdk/spdk_pid82583 00:26:56.282 Removing: /var/run/dpdk/spdk_pid82632 00:26:56.282 Removing: /var/run/dpdk/spdk_pid82999 00:26:56.282 Removing: /var/run/dpdk/spdk_pid83523 00:26:56.282 Removing: /var/run/dpdk/spdk_pid83941 00:26:56.282 Removing: /var/run/dpdk/spdk_pid84897 00:26:56.282 Removing: /var/run/dpdk/spdk_pid85873 
00:26:56.282 Removing: /var/run/dpdk/spdk_pid85984 00:26:56.282 Removing: /var/run/dpdk/spdk_pid86052 00:26:56.282 Removing: /var/run/dpdk/spdk_pid87511 00:26:56.282 Removing: /var/run/dpdk/spdk_pid87746 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88183 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88293 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88441 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88492 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88532 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88579 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88736 00:26:56.282 Removing: /var/run/dpdk/spdk_pid88889 00:26:56.282 Removing: /var/run/dpdk/spdk_pid89142 00:26:56.282 Removing: /var/run/dpdk/spdk_pid89259 00:26:56.282 Removing: /var/run/dpdk/spdk_pid89683 00:26:56.282 Removing: /var/run/dpdk/spdk_pid90062 00:26:56.282 Removing: /var/run/dpdk/spdk_pid90064 00:26:56.282 Removing: /var/run/dpdk/spdk_pid92300 00:26:56.282 Removing: /var/run/dpdk/spdk_pid92607 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93096 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93104 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93437 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93452 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93472 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93497 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93502 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93649 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93656 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93759 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93765 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93869 00:26:56.282 Removing: /var/run/dpdk/spdk_pid93877 00:26:56.282 Removing: /var/run/dpdk/spdk_pid94348 00:26:56.282 Removing: /var/run/dpdk/spdk_pid94391 00:26:56.282 Removing: /var/run/dpdk/spdk_pid94548 00:26:56.282 Removing: /var/run/dpdk/spdk_pid94668 00:26:56.282 Removing: /var/run/dpdk/spdk_pid95066 00:26:56.282 Removing: /var/run/dpdk/spdk_pid95318 00:26:56.282 Removing: /var/run/dpdk/spdk_pid95802 00:26:56.282 Removing: /var/run/dpdk/spdk_pid96357 00:26:56.282 Removing: /var/run/dpdk/spdk_pid96820 00:26:56.282 Removing: /var/run/dpdk/spdk_pid96915 00:26:56.282 Removing: /var/run/dpdk/spdk_pid97001 00:26:56.282 Removing: /var/run/dpdk/spdk_pid97086 00:26:56.282 Removing: /var/run/dpdk/spdk_pid97243 00:26:56.282 Removing: /var/run/dpdk/spdk_pid97332 00:26:56.282 Removing: /var/run/dpdk/spdk_pid97418 00:26:56.540 Removing: /var/run/dpdk/spdk_pid97513 00:26:56.540 Removing: /var/run/dpdk/spdk_pid97856 00:26:56.540 Removing: /var/run/dpdk/spdk_pid98553 00:26:56.540 Removing: /var/run/dpdk/spdk_pid99907 00:26:56.540 Clean 00:26:56.540 killing process with pid 61831 00:26:56.540 killing process with pid 61833 00:26:56.540 18:43:03 -- common/autotest_common.sh@1436 -- # return 0 00:26:56.540 18:43:03 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:56.540 18:43:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.540 18:43:03 -- common/autotest_common.sh@10 -- # set +x 00:26:56.540 18:43:03 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:56.540 18:43:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.540 18:43:03 -- common/autotest_common.sh@10 -- # set +x 00:26:56.540 18:43:03 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:56.540 18:43:03 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:56.540 18:43:03 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:56.540 18:43:03 
-- spdk/autotest.sh@394 -- # hash lcov 00:26:56.540 18:43:03 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:56.540 18:43:03 -- spdk/autotest.sh@396 -- # hostname 00:26:56.540 18:43:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:56.798 geninfo: WARNING: invalid characters removed from testname! 00:27:18.719 18:43:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:21.252 18:43:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:23.784 18:43:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:26.317 18:43:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:28.845 18:43:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:31.380 18:43:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:33.920 18:43:40 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:33.920 18:43:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:33.920 18:43:40 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:33.920 18:43:40 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.920 18:43:40 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.920 18:43:40 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.920 18:43:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.920 18:43:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.920 18:43:40 -- paths/export.sh@5 -- $ export PATH 00:27:33.920 18:43:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.920 18:43:40 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:33.920 18:43:40 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:33.920 18:43:40 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720982620.XXXXXX 00:27:33.920 18:43:40 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720982620.Akfq4x 00:27:33.920 18:43:40 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:33.920 18:43:40 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:27:33.920 18:43:40 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:33.920 18:43:40 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:33.920 18:43:40 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:33.920 18:43:40 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:33.920 18:43:40 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:33.920 18:43:40 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:27:33.920 18:43:40 -- common/autotest_common.sh@10 -- $ set +x 00:27:33.920 18:43:40 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:33.920 18:43:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:33.920 18:43:40 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:33.920 
18:43:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:33.920 18:43:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:33.920 18:43:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:33.920 18:43:40 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:33.920 18:43:40 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:33.920 18:43:40 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:33.920 18:43:40 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:33.920 18:43:40 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:33.920 + [[ -n 5977 ]] 00:27:33.920 + sudo kill 5977 00:27:33.932 [Pipeline] } 00:27:33.953 [Pipeline] // timeout 00:27:33.957 [Pipeline] } 00:27:33.977 [Pipeline] // stage 00:27:33.982 [Pipeline] } 00:27:34.000 [Pipeline] // catchError 00:27:34.010 [Pipeline] stage 00:27:34.013 [Pipeline] { (Stop VM) 00:27:34.027 [Pipeline] sh 00:27:34.306 + vagrant halt 00:27:37.591 ==> default: Halting domain... 00:27:44.158 [Pipeline] sh 00:27:44.441 + vagrant destroy -f 00:27:47.727 ==> default: Removing domain... 00:27:47.740 [Pipeline] sh 00:27:48.019 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:48.029 [Pipeline] } 00:27:48.048 [Pipeline] // stage 00:27:48.053 [Pipeline] } 00:27:48.071 [Pipeline] // dir 00:27:48.076 [Pipeline] } 00:27:48.094 [Pipeline] // wrap 00:27:48.100 [Pipeline] } 00:27:48.116 [Pipeline] // catchError 00:27:48.125 [Pipeline] stage 00:27:48.128 [Pipeline] { (Epilogue) 00:27:48.143 [Pipeline] sh 00:27:48.479 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:55.047 [Pipeline] catchError 00:27:55.048 [Pipeline] { 00:27:55.059 [Pipeline] sh 00:27:55.331 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:55.331 Artifacts sizes are good 00:27:55.337 [Pipeline] } 00:27:55.352 [Pipeline] // catchError 00:27:55.361 [Pipeline] archiveArtifacts 00:27:55.367 Archiving artifacts 00:27:55.533 [Pipeline] cleanWs 00:27:55.544 [WS-CLEANUP] Deleting project workspace... 00:27:55.544 [WS-CLEANUP] Deferred wipeout is used... 00:27:55.550 [WS-CLEANUP] done 00:27:55.552 [Pipeline] } 00:27:55.569 [Pipeline] // stage 00:27:55.574 [Pipeline] } 00:27:55.587 [Pipeline] // node 00:27:55.593 [Pipeline] End of Pipeline 00:27:55.626 Finished: SUCCESS
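For reference, the kernel_target_abort test above (configure_kernel_target / clean_kernel_target in nvmf/common.sh) drives the in-kernel nvmet target entirely through configfs; the xtrace shows the echo commands but not their redirect targets. Below is a minimal sketch of that setup and teardown, assuming the standard nvmet configfs attribute names and reusing the values this particular run selected (/dev/nvme1n3 as the backing block device, a TCP listener on 10.0.0.1:4420). It is an illustration of the procedure, not a copy of the script itself.

# Sketch of the configfs-based kernel NVMe-oF/TCP target exercised by
# kernel_target_abort. Attribute file names below are the standard nvmet
# configfs layout (the xtrace does not show redirect targets); device and
# address values are the ones this run happened to pick.
subsys=/sys/kernel/config/nvmet/subsystems/kernel_target
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                      # as in nvmf/common.sh@630
mkdir "$subsys" "$ns" "$port"

# Subsystem: model string (assumed target file) and open host access.
echo SPDK-kernel_target > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"

# Namespace 1 backed by the block device the script selected.
echo /dev/nvme1n3 > "$ns/device_path"
echo 1 > "$ns/enable"

# TCP listener on 10.0.0.1:4420, IPv4.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"

# Expose the subsystem on the port.
ln -s "$subsys" "$port/subsystems/kernel_target"

# Teardown, mirroring clean_kernel_target:
echo 0 > "$ns/enable"
rm -f "$port/subsystems/kernel_target"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

With the port symlink in place, the abort example connects with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target', exactly as the qd 4/24/64 runs above do.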